Background

As an e-commerce retailer, finding the right products to sell is critical to growth. The market research team determines whether items (SKUs) from cost lists supplied by current or potential vendors fit Spreetail’s criteria for success.

However, this team relied on brittle, complicated workflows stitched together from multiple tools, such as Excel and HubSpot. As the sole designer on the assortment growth squad, I was tasked with researching and building tools to replace or supplement these workflows.

Discovery, or “Yikes, there has got to be a better way.”

Early research had two goals:

  • Understand the complexities of the market research team’s current processes.

  • Determine key areas of opportunity.

I conducted interviews with members of the market research team and other teams that use their deliverables, shadowed members of the market research team, and analyzed artifacts and deliverables from their current workflow.

Example of a potential UX flow for creating a new project.

A journey map of the market research team process.

Opportunity: Research should not be this scary

  • The current processes rely on brittle, extensive Excel sheets.

    • It’s difficult to update the sheets when decision-making criteria change.

    • There’s a large learning curve for new hires.

    • Users outside of market research are intimidated by the research document and don’t use it as robustly as they could.

  • A large amount of time is spent cleaning up data that is pulled from Keepa before any analysis can be done. Data can be unreliable, duplicated, or misattributed.

  • Market research needs to keep track of a variety of data sources – project information can be communicated via HubSpot, Helpdesk, Teams, and e-mail. Market research may keep notes about projects or items in HubSpot, Notepad, or physical notebooks.

Let’s make researchers’ jobs way easier. Say, 33%?

After early research, it was time to further loop in stakeholders, market research team members, and engineers. I created possible visions for the future of SKU Scout, and used my research to guide conversations around potential MVPs and iterations of the product. I provided key takeaways and understanding of the users via remote meetings, Confluence documents, and Figma prototypes.

This UI surfaces relevant product information while allowing the user to focus on their core job – answering the subjective questions about the product.

V1 product goals

  • Move market research workflows from the Excel document to a bespoke tool in Toolkit.

  • Make it easy to implement updates to research logic.

  • Minimize the time market research spends doing rote data cleanup and entry.

  • Make it easy for market research to convey information about their research process and information about specific items.

  • Allow market research to focus on where their greatest value add is – expert understanding of what makes particular items successful for Spreetail and what needs to be done to create a successful vendor order.

User testing: Make it easy to understand what data they need

After establishing stakeholder buy-in and creating a shared understanding of the problem space, the squad determined the direction and scope of the project. I identified critical features and began creating plans to quickly and incrementally user test these flows. I scoped each round of testing with clear objectives and prototypes at the appropriate fidelity to answer the questions at hand.

Testing shared rank

The goal of the “shared rank” workflow was to simplify the extensive data entry required when an item is part of a shared listing on Amazon. While the Helium plugin often provided review percentages directly, occasionally the user only knew the raw review counts and had to calculate the percentage for as many as thirty variations to determine the rank percentage, then enter the relevant percentages into the Excel doc.
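
The manual calculation described above is straightforward but tedious at thirty variations. A minimal sketch of what the tool automates, assuming each variation's share is simply its review count divided by the listing total (function and variation names are illustrative, not taken from SKU Scout itself):

```python
def shared_rank_percentages(review_counts: dict[str, int]) -> dict[str, float]:
    """Return each variation's percentage share of the listing's total reviews."""
    total = sum(review_counts.values())
    if total == 0:
        # Avoid division by zero when no variation has reviews yet.
        return {variation: 0.0 for variation in review_counts}
    return {
        variation: round(count / total * 100, 1)
        for variation, count in review_counts.items()
    }

# Hypothetical listing with three variations:
counts = {"Blue": 120, "Red": 60, "Green": 20}
print(shared_rank_percentages(counts))  # {'Blue': 60.0, 'Red': 30.0, 'Green': 10.0}
```

Doing this once per listing in the tool, rather than per variation in a spreadsheet, is what removes the repeated manual arithmetic.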

I tested a prototype of just this portion of the user flow at low visual fidelity, with the main goal of testing the types of inputs and the information architecture of the input flow. Prompting was left fairly open-ended – I asked users how they would enter shared rank data for two specific listings in the interface, then asked follow-up questions to gauge their understanding of the microcopy and their expectations through the process.

Low visual fidelity wireframe for testing.

Findings: Better UX for less repeated work

  • While I initially surfaced inputs for both the review number and the rating percentage, in testing users assumed they had to enter both data points, and didn’t realize the system could calculate the shared rank. By adding a button to select how they’d like to enter the data and updating the copy to reflect that, users found the workflow more intuitive.

  • Users thought they would need to enter data for every variation on the listing, and were confused if not all variations were listed. After adding additional information around variations found vs. variations on the cost list provided by the vendor, users understood they only need to enter information for variations on the cost list.

The end result for shared rank review testing.

V1 outcomes

You can check out a clickable prototype of the SKU Scout V1 here.

The average time to complete a project has been reduced to roughly a third of what it was – from three hours to one hour.

Users note that cleaning up data feels much simpler, and that they’re better able to focus on providing insights into products rather than scrubbing data.