Case Study · Core Product · Information Architecture

Assignment Library Search

NoRedInk's Assignment Library is vast, containing thousands of pieces of content. Careful organization makes the library a critical, useful part of the NoRedInk experience, but not all users have the time or inclination to browse through it all. That's where Search comes in: both the universal fallback for all users and the first choice for some.

However, NoRedInk's search function left much to be desired. It was slow, buggy, and, most crucially, did not return all of our content. A larger company goal to increase how often writing and critical thinking skills were assigned eventually provided the opportunity not only to reach a business goal but also to improve the experience for our users. I worked with a product manager, a curriculum specialist, and an engineer to make this happen.

Goals of the Project

The overarching metric we wanted to increase, per company objectives, was the number of WACTS (writing and critical thinking skills) assignments created per teacher. We'd do this by ensuring that the Search experience surfaced writing content in a reliable and relevant way that gave teachers both the assignments they were looking for and those we thought would be useful to them. We also set secondary metrics we were hoping to move, including writing conversion per search session, overall assignment conversion, and adoption of search among Assignment Library users.

Qualitatively, we wanted teacher expectations for Search to be met, captured in the mantra: "Show me the most relevant, highest-leverage content first." The Search experience at the time violated this routinely.

Problems to be Solved

To achieve those goals, we needed to articulate exactly what was wrong with our current Search experience. We found that Search accounted for 21% of Assignment Library sessions, but that only 6% of Search sessions converted to writing assignments. Teachers searching for grammar terms were not shown relevant related writing assignments even when they existed. Some of the key issues with the existing experience included:

  • Results were unweighted and always displayed alphabetically.
  • Writing content appeared last or not at all.
  • The text of individual writing prompts was not indexed.
  • Long, unsortable, unfaceted result lists forced teachers to sift through irrelevant items before finding anything usable.
  • Searches took 5–10 seconds to load because the entire operation ran in the frontend.
  • Safari users could not use search at all due to browser incompatibility.

The net effect was that teachers searching for writing often assumed we didn't have what they needed. This frustrated new users wanting to see what we had to offer and veteran users trying to find specific content, and it undercut NoRedInk's wish to be seen as more than a grammar tool.

Plan of Attack

The process of concretely improving Search ended up touching engineering, information architecture, user experience, and UI design. The first step was migrating from a slow frontend-based system to a backend one, which eliminated performance issues, enabled full-library indexing, and unlocked relevance tuning.
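In broad strokes, the migration moved the query itself to the server: instead of the browser downloading the library and filtering it locally (the source of the 5–10 second loads), the client asks an endpoint that consults a prebuilt, full-library index. Below is a minimal sketch under illustrative assumptions; the Express wiring, route, and data are mine, not NoRedInk's actual stack.

```typescript
import express from "express";

// Hypothetical in-memory index standing in for the real backend index.
// The point of the migration: filtering and ranking happen server-side,
// against the full library, instead of in the browser.
type IndexedItem = { id: string; title: string; body: string };

const index: IndexedItem[] = [
  { id: "topic-1", title: "Comma Splices", body: "Practice topic..." },
  { id: "prompt-1", title: "School Uniforms", body: "Argumentative prompt..." },
];

const app = express();

app.get("/search", (req, res) => {
  const q = String(req.query.q ?? "").trim().toLowerCase();
  const results =
    q === ""
      ? []
      : index.filter(
          (item) =>
            item.title.toLowerCase().includes(q) ||
            item.body.toLowerCase().includes(q)
        );
  // Results arrive already filtered and capped; the client only renders.
  res.json({ results: results.slice(0, 50) });
});

app.listen(3000);
```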

With a new, highly configurable search system in place, we created a tiered weighting framework that ensured more relevant, more writing-focused content was returned. High-leverage writing modules received additional weight, synonym groups and manual overrides honored teacher intent, and exact matches always outranked fuzzy matches.
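To make the tiering concrete, here is a sketch of how such scoring might behave. The weights, synonym groups, and type names are illustrative assumptions; the real framework lived in the configuration of the new search backend rather than in hand-rolled code like this.

```typescript
// Illustrative content-type weights: high-leverage writing modules are
// boosted so they outrank equally-matching grammar topics.
type ContentType = "writingModule" | "writingPrompt" | "pathway" | "practiceTopic";

const typeWeight: Record<ContentType, number> = {
  writingModule: 2.0,
  writingPrompt: 1.5,
  pathway: 1.2,
  practiceTopic: 1.0,
};

// Hypothetical synonym groups honoring teacher intent: a search for
// "thesis" should also surface content titled "claim", and so on.
const synonyms: Record<string, string[]> = {
  thesis: ["claim", "main idea"],
  commas: ["comma", "punctuation"],
};

interface Item {
  title: string;
  type: ContentType;
}

function score(item: Item, query: string): number {
  const q = query.toLowerCase();
  const terms = [q, ...(synonyms[q] ?? [])];
  const title = item.title.toLowerCase();
  // Exact matches always outrank fuzzy/partial ones: the constant bonus
  // dwarfs any type weight, so boosts can never reorder the tiers.
  const exact = terms.some((t) => title === t) ? 100 : 0;
  const partial = terms.some((t) => title.includes(t)) ? 1 : 0;
  return (exact + partial) * typeWeight[item.type];
}

// Example: for "thesis", the exact-title match wins outright; among
// partial matches, a writing module's boost would lift it above a topic.
const results: Item[] = [
  { title: "Thesis", type: "practiceTopic" },
  { title: "Thesis Statements", type: "writingModule" },
];
results.sort((a, b) => score(b, "thesis") - score(a, "thesis"));
```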

The new backend framework allowed us to index individual writing prompts for the first time, opening up vast new swathes of writing content to the Search user. Finally, we redesigned the UI to offer a simple listing of clearly labeled, consistently formatted, easily filterable results with a low learning curve and simple operation.
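In the same illustrative terms, indexing prompts amounts to giving each prompt its own index document, so a query can hit the prompt's body directly rather than only its parent module's title. All names and values below are hypothetical.

```typescript
// One index document per writing prompt. Previously only module- and
// topic-level titles were indexed, leaving prompt text invisible to Search.
interface PromptDoc {
  id: string;
  kind: "writingPrompt";
  moduleId: string; // parent module, kept for display and deep-linking
  title: string;
  promptText: string; // the full prompt body, now searchable
}

const example: PromptDoc = {
  id: "prompt-482",
  kind: "writingPrompt",
  moduleId: "module-argumentative-17",
  title: "School Start Times",
  promptText:
    "Write an argument for or against later start times, addressing at least one counterclaim.",
};
// A search for "counterclaim" can now match this prompt's body even though
// no title in the library contains the word.
```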

Evolution of the Design

Over the course of the project, the design underwent considerable evolution, with nearly 30 different iterations. We iterated in lockstep with results from our working prototype, adjusting parameters, weights, and tags as we went. Early iterations took a naive approach, simply listing each kind of content as it appeared in its original location in the Assignment Library. Concurrently, I experimented with whether content types should be grouped together or whether a flat list made more sense.

I also tried out iconography, explicit headings, and sections to help distinguish between different kinds of content, and experimented with how much descriptive info to include per item versus just the name. I tried alternate layout options beyond a list, including grid-like and tabular options. These design paths ended up leading back to a more straightforward approach where the emphasis was placed on content rather than metadata or differentiation.

A particular point we went back and forth on was whether to artificially elevate writing results to give them greater visual prominence. After design and implementation iteration, we ultimately decided against it. Presenting the most relevant results remained our north star, and we felt that proper weighting and tagging could serve our business goals at the same time.

Architecture

A major element of the design process was determining how to display the heterogeneous content that existed in the Assignment Library. Practice Topics are usually selected via checkbox, and multiple Topics can be checked to form an assignment. Topics exist in Pathways and in Modules, where they appear alongside writing prompts grouped for relevancy. These Pathways and Modules were themselves useful search results.

Writing Prompts are different: each Prompt is an assignment unto itself and can't be combined with anything else. Choosing a Prompt means entering a one-way flow to customize and assign it. Passage Quizzes are Topics that can only be assigned as Quizzes. Search was also returning Prompt Groups and whole categories of prompts, natively presented as in-place expanders and independent pages.

I iterated from a place where each content type looked and functioned as it did natively in the Assignment Library to one of greater uniformity, where eventually every content type appeared in the same format with a single action that took the user to a new page. This greatly lowered the cognitive lift of understanding and using the page and put the emphasis on the content rather than its mechanics.
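That end state can be summarized as a single result shape: whatever the underlying content type, every hit renders as the same card with one action that navigates to a dedicated page where the type-specific flow takes over. A sketch with hypothetical names, not the production model:

```typescript
// Every heterogeneous library type is normalized into one uniform card.
type ContentKind =
  | "practiceTopic"
  | "pathway"
  | "module"
  | "writingPrompt"
  | "passageQuiz"
  | "promptGroup";

interface SearchResultCard {
  kind: ContentKind; // rendered as a small label, not as divergent UI
  title: string;
  description: string;
  // The single action: navigate to a page where type-specific behavior
  // (Topic checkboxes, one-way Prompt customization, etc.) takes over.
  // The results list itself stays visually and behaviorally uniform.
  href: string;
}
```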

Release and Results

This project required tight collaboration across teams and a thorough QA phase. After partnering with Curriculum and Engineering to validate results across more than twenty search archetypes, import tag data, and implement last-minute overrides, we dogfooded the feature with internal teams, followed by a limited release to 10% of traffic and, finally, a wide release.
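The archetype validation can be pictured as a small regression suite: for each representative query, assert that the expected content surfaces near the top. This is a hedged sketch; the queries, titles, and stubbed search call are placeholders, not the actual twenty-plus archetypes.

```typescript
import assert from "node:assert";

// Stand-in for the real search call, which hit the new backend index.
function search(query: string): { title: string }[] {
  const corpus = ["Comma Splices", "Thesis Statements", "Semicolons"];
  return corpus
    .filter((title) => title.toLowerCase().includes(query.toLowerCase()))
    .map((title) => ({ title }));
}

// Hypothetical archetype table: a representative query paired with content
// we expect within the top N results.
const archetypes = [
  { query: "comma", expectTitle: "Comma Splices", withinTop: 3 },
  { query: "thesis", expectTitle: "Thesis Statements", withinTop: 3 },
];

for (const { query, expectTitle, withinTop } of archetypes) {
  const top = search(query).slice(0, withinTop).map((r) => r.title);
  assert(top.includes(expectTitle), `"${query}" should surface "${expectTitle}"`);
}
```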

As enough data came in, we found encouraging results for our primary metric. Teachers who used Search assigned an average of 1.93 WACTS assignments versus the baseline of 1.78, and power users who searched 10+ times assigned 2.42 (versus a baseline of 2.32). Teachers who used Search were also nearly twice as likely to assign writing as those who didn't, a 13.5% year-over-year increase.

Our secondary metrics showed more modest gains, with one exception: writing and overall conversion were essentially unchanged, but adoption rose from 34% to 42% of Assignment Library users, a 23% relative increase. The lift came primarily from more teachers using Search at all, suggesting we should have promoted the new experience more explicitly. Grammar remained the dominant search behavior, but better surfacing of writing content drove incremental gains.

Conclusions

This project transformed a slow, unprioritized, and often misleading search experience into a fast, structured, writing-forward tool. It demonstrated how technical, organizational, and experiential improvements can shift teacher behavior, and it positioned NoRedInk to continue evolving toward a more intelligent, flexible, and writing-centered search experience.

However, one feature alone can't totally shift user behavior and attitudes. The gains we saw were meaningful, but incremental. Teachers continue to think of NoRedInk as a grammar tool, and further shifting that perception is beyond the scope of any one project. Ultimately, we must meet the user where they are. By improving but not inflating writing's presence in Search, this project did just that while honoring business goals and user experience expectations.