Overview

ASI distributors rely on our platform to generate quotes and place orders for products they present to clients for events. These distributors dedicate significant time to configuring products, handling multiple customer orders simultaneously, and overseeing product deliveries. Our dedicated management tool streamlines the entire order process, empowering distributors to efficiently manage their operations from start to finish. Our goal was to reassess the order management process, identify pain points, and improve efficiency and usability so that distributors have a faster, more seamless ordering experience.

Challenge

We aimed to refine our designs through user testing, but time constraints made it challenging to involve our distributors directly. To ensure meaningful feedback, we needed test participants who closely resembled our distributor user base. Screening users on UserTesting.com was a crucial step in our study, allowing us to gather insights aligned with the perspectives our distributors would provide. This screening extended the timeline but was necessary to collect relevant data.


Role: UX Researcher

Duration: 4 months

Tools: Microsoft Teams, Miro, Figma

Research Objectives

  1. Identify key workflows that distributors complete within the order management system.

  2. Establish a baseline by evaluating the current user experience for each workflow.

  3. Refine the design through iterative testing and compare results to measure improvements.

Research Process

Before planning our usability testing, I conducted a heuristic evaluation of the order management system within our platform, using Jakob Nielsen’s 10 usability heuristics as a framework. This evaluation helped identify the key elements and processes that distributors rely on, ensuring we tested the most critical aspects of their workflow. Our goal was to assess the user journey of a distributor processing an order from start to finish, highlighting potential usability issues. We chose to analyze the full process of creating and completing an order to understand the holistic experience of our management system.

Baseline tests were conducted on UserTesting.com using a Figma prototype. We screened 300+ candidates and recruited a pool of 100 qualified participants who mirrored our distributors, allowing faster recruitment for each testing iteration. From the heuristic evaluation, we selected four key order management processes—common distributor tasks with the most severe heuristic violations. Each process was tested individually with 10 participants per iteration, totaling 40 participants per round. To ensure fresh perspectives, we used different participants for each test.

Each test included multiple tasks designed to be efficient and unbiased, ensuring participants could navigate the system naturally without being guided toward a specific outcome.

For each task, we recorded four key measurements:

  • First Click – Did the participant successfully click on the correct location first?

  • Success – Did the participant successfully complete the task?

  • Time on Task – How long did it take to complete the task?

  • Perceived Ease Score – On a scale of 1 (Very Difficult) to 5 (Very Easy), how would the participant rate this task?

These metrics were selected to capture both the participant's performance and perceived experience, giving a clear picture of the usability of our order management platform. For each test, we calculated the average of the four measurements for each task and for the test overall, then recorded the results in an Excel document for later comparison.
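For illustration only, the sketch below shows how those per-task and overall averages could be computed programmatically; in practice we recorded them by hand in Excel, and the column names and sample values here are hypothetical, not real study data.

```python
# Hypothetical sketch of the per-task averaging we did by hand in Excel.
# Column names and sample values are illustrative, not real study data.
import pandas as pd

results = pd.DataFrame({
    "task":           ["create_order", "create_order", "add_product", "add_product"],
    "first_click":    [1, 0, 1, 1],          # 1 = correct first click, 0 = incorrect
    "success":        [1, 1, 0, 1],          # 1 = task completed, 0 = not completed
    "time_on_task":   [95, 120, 240, 210],   # seconds
    "perceived_ease": [4, 5, 2, 3],          # 1 (Very Difficult) to 5 (Very Easy)
})

# Average of each measurement per task...
per_task = results.groupby("task").mean(numeric_only=True)

# ...and for the test overall, kept side by side for comparison across iterations.
overall = results.drop(columns="task").mean().to_frame("overall").T

print(pd.concat([per_task, overall]))
```

Keeping each round's averages in the same format made it straightforward to compare the baseline against later iterations task by task.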


After the baseline test, our designer spent two weeks refining the design based on participant feedback. Using the same questions and metrics for consistency, we tested 10 new participants per process (40 in total) on UserTesting.com.

The second test showed significant improvements in first-click accuracy, indicating better navigation. However, while participants rated the experience as ‘Easy,’ success rates were lower than expected. Our designer refined the process flow, and a third test followed with another 40 participants.

The third test saw minimal change in first-click accuracy but improved success rates, though task completion times increased by 30–60 seconds. Balancing task time against completion remained a priority, but the improved success rates validated our design. Across all tests, we evaluated the baseline and two iterations with 120 participants, guiding our final development decisions.


Reflection and Takeaways

Iterative usability testing allowed us to identify issues in our current experience and guided improvements in subsequent tests. However, the testing could have been more robust with a larger pool of participants per iteration. Due to time and budget constraints, we were limited to 10 participants per test. Another area for improvement was incorporating our own users into the testing process. Since they spend most of their workday on our platform, their feedback would have been more relevant and insightful. As we move forward with implementing our design, we plan to conduct usability testing with our actual users to gather final feedback.