Background
ASI distributors rely on our platform to generate quotes and place orders for products they present to clients for events. These distributors dedicate significant time to configuring products, handling multiple customer orders simultaneously, and overseeing product deliveries. Our dedicated management tool streamlines the entire order process, empowering distributors to efficiently manage their operations from start to finish. Our goal was to reassess the order management process by identifying pain points and enhancing efficiency and usability, ensuring a faster and more seamless ordering experience for our users.
Challenge
We aimed to refine our designs through user testing, but time constraints made it challenging to involve our distributors directly. To ensure meaningful feedback, we needed test participants who closely resembled our distributor user base. Screening users on UserTesting.com was a crucial step in our study, allowing us to gather insights that aligned with the perspectives our distributors would provide. This process did extend the timeline but was necessary to gather relevant, comparable data.
My Role and Responsibility
Methodology
The goal of our study was to identify which distributor workflows, if improved, would have the highest impact on order performance. We outlined the following research objectives to guide us:
Identify key workflows that distributors complete within the order management system.
Establish a baseline by evaluating the current user experience for each workflow.
Refine the design through iterative testing and compare results to measure improvements.
Before planning our usability testing, I conducted a heuristic evaluation of the order management system within our platform using Jakob Nielsen’s 10 usability heuristics as a framework. This evaluation helped identify the key elements and processes that distributors rely on, ensuring we tested the most critical aspects of their workflow. Our goal was to assess the user journey of a distributor processing an order from start to finish, highlighting potential usability issues. We chose to analyze the full process of creating and completing an order to understand the holistic experience of our management system. The heuristics we used are listed below:
Visibility of System Status
Match Between the System and the Real World
User Control and Freedom
Consistency and Standards
Error Prevention
Recognition Rather than Recall
Flexibility and Efficiency of Use
Aesthetic and Minimalist Design
Help Users Recognize, Diagnose, and Recover from Errors
Help and Documentation
Our heuristic evaluation revealed several critical usability issues within the order management process, particularly related to three key heuristics: Visibility of System Status, Consistency and Standards, and Recognition Rather Than Recall. Many system actions lacked visible feedback, leaving users unaware of ongoing or completed events. Messaging was inconsistent across identical actions, and in some instances, data displayed was incorrect in one area while accurate in another. Additionally, users were often presented with multiple options simultaneously, without clear guidance or prioritization, which increased cognitive load and hindered decision-making. We systematically documented the locations and impact of each usability issue and identified four high-priority processes within the order management system to evaluate further through usability testing.
Participant Screening
Our primary challenge was determining how to conduct quantitative usability testing quickly and efficiently. While we initially aimed to test with our actual user base, time constraints required us to source participants externally. We opted to use UserTesting’s participant pool and developed a detailed screener to closely match our existing distributor user base. The screener included multiple questions assessing participants’ job titles, professional responsibilities, familiarity with competitors, experience selling branded products or services, and knowledge of the promotional products industry. Participants must have selected “Helping clients who want to market their brands on various products or materials,” since that is the core role of a distributor at ASI. Based on these criteria, we screened over 300 candidates and successfully recruited 100 qualified participants. This allowed us to run faster testing iterations with a representative sample.
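To make the qualification rule concrete, here is a minimal sketch of how the screener logic could be expressed in code. The field names and the second candidate’s answers are hypothetical; only the required answer text comes from our actual screener.

```python
# Minimal sketch of screener qualification logic; field names and the
# sample candidates are hypothetical, not actual study data.
REQUIRED_ANSWER = (
    "Helping clients who want to market their brands on various products or materials"
)

def qualifies(response: dict) -> bool:
    """A candidate qualifies only if they picked the distributor-specific answer
    and reported relevant experience on the other screener questions."""
    return (
        response.get("primary_responsibility") == REQUIRED_ANSWER
        and response.get("sells_branded_products") is True
        and response.get("knows_promo_industry") is True
    )

candidates = [
    {"primary_responsibility": REQUIRED_ANSWER, "sells_branded_products": True, "knows_promo_industry": True},
    {"primary_responsibility": "Managing internal IT systems", "sells_branded_products": False, "knows_promo_industry": False},
]
qualified = [c for c in candidates if qualifies(c)]
print(len(qualified))  # 1 of the 2 sample candidates qualifies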
Quantitative Usability Testing
Baseline tests were conducted on UserTesting.com using a Figma prototype. We selected four key order management processes—common distributor tasks with the most severe heuristic violations. Each process was tested individually with 10 participants per process, totaling 40 participants in the baseline test. To ensure fresh perspectives, we used different participants for each round of testing.
Each test included multiple tasks designed to be efficient and unbiased, ensuring participants could navigate the system naturally without being guided toward a specific outcome.
For each task, we recorded four key measurements:
First Click – Did the participant successfully click on the correct location first?
Success – Did the participant successfully complete the task?
Time on Task – How long did it take to complete the task?
Perceived Ease Score – On a scale of 1 (Very Difficult) to 5 (Very Easy), how would the participant rate this task?
These metrics were selected to effectively capture both the participant's experience and performance, providing a clear picture of the usability of our order management platform. For each test, we calculated the average score of the four measurements for each task and the overall test, then recorded them in an Excel document for future comparison. For the purpose of this case study, we will review how we tested the process of sending a purchase order.
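While we tracked these averages in Excel, the aggregation itself is simple arithmetic. The sketch below shows the same calculation in Python; the participant records, field names, and values are illustrative rather than actual study data.

```python
# Minimal sketch of averaging the four measurements for one task;
# records and values are illustrative, not the actual study data.
from statistics import mean

# Each record captures one participant's results for one task.
results = [
    {"task": "Send purchase order", "first_click": 1, "success": 1, "time_sec": 110, "ease": 4},
    {"task": "Send purchase order", "first_click": 0, "success": 1, "time_sec": 145, "ease": 3},
    {"task": "Send purchase order", "first_click": 0, "success": 0, "time_sec": 180, "ease": 3},
]

def summarize(records):
    """Average the four measurements across all participants for a task."""
    return {
        "first_click_rate": mean(r["first_click"] for r in records),  # share who clicked correctly first
        "success_rate": mean(r["success"] for r in records),          # share who completed the task
        "avg_time_sec": mean(r["time_sec"] for r in records),         # mean time on task, in seconds
        "avg_ease": mean(r["ease"] for r in records),                 # mean perceived ease, 1-5 scale
    }

print(summarize(results))
```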
Baseline test results:
First Click: 45% successfully clicked the correct location first
Success: 75% of participants successfully completed the task
Time on Task: 2 minutes
Perceived Ease Score: Participants rated the task 3.5 / 5 for ease
Our baseline usability test revealed that participants were initially unsure where to begin when attempting to send a purchase order. Even after locating the starting point, many struggled to complete the subsequent steps, encountering significant usability challenges. In response, our designer spent two weeks refining the interface based on participant feedback. To measure progress consistently, we used the same tasks and metrics in each round of testing, evaluating 10 new participants per iteration via UserTesting.com.
Improvement test 1 results:
First Click: 75% successfully clicked the correct location first
Success: 100% of participants successfully completed the task
Time on Task: 57 seconds
Perceived Ease Score: Participants rated the task 4.5 / 5 for ease
The second round of testing showed a 25-percentage-point improvement in task completion, indicating clearer user guidance. First-click accuracy also improved by 30 percentage points, though it remained below our target threshold. Based on these insights, our designer further refined the process flow, and we conducted a third round of testing with an additional 40 participants.
Improvement test 2 results:
First Click: 90% successfully clicked the correct location first
Success: 95% of participants successfully completed the task
Time on Task: 1 minute 35 seconds
Perceived Ease Score: Participants rated the task 4.4 / 5 for ease
In the third round of testing, we observed a further improvement in first-click accuracy alongside a slight decrease in overall task success rate. The most notable change was a roughly 38-second increase in average time on task. Given our time constraints, we prioritized improvements in first-click accuracy and success rate, deferring optimization of task efficiency to a future iteration. Across all three rounds (the baseline and two improvement iterations), we tested a total of 120 participants, which directly informed our final design and development decisions.
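For transparency, the round-over-round comparison amounts to simple differences between the reported averages. The sketch below reproduces that arithmetic in Python using the purchase-order figures above; the dictionary structure and round labels are just for illustration.

```python
# Round-over-round deltas for the purchase-order process, using the
# averages reported in this case study; structure is illustrative only.
rounds = {
    "baseline":    {"first_click": 45, "success": 75,  "time_sec": 120, "ease": 3.5},
    "iteration_1": {"first_click": 75, "success": 100, "time_sec": 57,  "ease": 4.5},
    "iteration_2": {"first_click": 90, "success": 95,  "time_sec": 95,  "ease": 4.4},
}

def delta(before, after):
    """Change from one round to the next: percentage points for rates, seconds for time."""
    return {metric: round(after[metric] - before[metric], 1) for metric in before}

print(delta(rounds["baseline"], rounds["iteration_1"]))    # first click +30 pts, success +25 pts, time -63 s
print(delta(rounds["iteration_1"], rounds["iteration_2"]))  # first click +15 pts, success -5 pts, time +38 s
```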
Reflection and Takeaways
Iterative quantitative usability testing allowed us to identify issues in our current experience and guided improvements in subsequent tests. However, the testing could have been more robust with a larger pool of participants per iteration. Due to time and budget constraints, we were limited to 10 participants per test. Another area for improvement was incorporating our own users into the testing process. Since they spend most of their workday on our platform, their feedback would have been more relevant and insightful. As we move forward with implementing our design, we plan to conduct usability testing with our actual users to gather final feedback.