
Considerations for Running a Scalable User Testing Initiative

Tony Knaff is Associate Creative Director, UX

Let’s face it, there’s a LOT that goes into software design and development. I’m sure we’re all familiar with the phrase, “If we don’t get this done, it’s not the end of the world.” Thankfully, most of us are not in a line of work where the choices we make decide the fate of the planet. However, I’m reminded of the incident last year in Hawaii, when an employee accidentally triggered the state’s missile warning system while intending to run a test of that system. The mistake stemmed from a confusing user interface, and the gravity of the situation certainly put the phrase “it’s not the end of the world” to the test. Your platform may not be controlling nukes, but if you’re the one who has to answer for poor conversion rates or some other site shortcoming, it may as well be.

When approaching platform design, whether starting from scratch or building upon an existing experience, testing your interface and validating decisions is the keystone of the user-centered design process. These tests increase your understanding of the user, help catch critical usability issues (like accidentally launching nukes), and save the expense of reworking the interface if problems are found post-launch. Still, many teams opt to hold off on conducting user tests for a variety of reasons. They may wonder:

  • How do I know what tests to conduct and when to conduct them?
  • How do I justify the costs of user testing given the constraints of my budget?
  • How do I recruit the right participants for these tests?

Assessing User Testing “Costs”

When thinking about cost, it’s important to reiterate that allocating resources upfront can be significantly more cost-effective than fixing errors later, and the untold cost to a brand’s reputation of going live with a subpar user experience should not be underestimated.

A prioritization matrix can help track usability concerns and assign each one a level of effort and impact (a simple sketch follows below). Tackling the low-effort, high-impact concerns first makes efficient use of resources, while the high-effort, low-impact concerns can be saved for later.

[Image: Sample prioritization matrix]
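As a rough illustration of how such a matrix can drive ordering, here is a minimal sketch that buckets usability concerns by effort and impact. The concerns and scores below are hypothetical examples, not findings from a real study.

```python
# A minimal sketch of a prioritization matrix, assuming each usability concern
# has already been scored for effort and impact. All entries are hypothetical.
concerns = [
    {"issue": "Confusing label on the checkout button", "effort": "low",  "impact": "high"},
    {"issue": "Rebuild the site-wide navigation",       "effort": "high", "impact": "high"},
    {"issue": "Reorder footer links",                   "effort": "low",  "impact": "low"},
    {"issue": "Redesign the account settings area",     "effort": "high", "impact": "low"},
]

# Low effort / high impact comes first, high effort / low impact comes last.
rank = {("low", "high"): 1, ("high", "high"): 2, ("low", "low"): 3, ("high", "low"): 4}

for c in sorted(concerns, key=lambda c: rank[(c["effort"], c["impact"])]):
    print(f'{c["issue"]} (effort: {c["effort"]}, impact: {c["impact"]})')
```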

A variety of testing methods are available to accommodate even the leanest of budgets. Automated testing and simulated eye tracking can be set up and executed extremely quickly, providing near-immediate results.

To ensure the greatest possible return on investment, save large-scale tests for clearly defined initiatives where you need a greater amount of qualitative and quantitative data. Start with small-scale tests that can be executed quickly with a smaller set of users. Nielsen Norman Group’s research shows that testing with even 5 users uncovers the majority of usability problems (roughly 85%, by their estimate). For example, instead of a single test of 15 users, try three iterative tests of 5 users each, so that the design is refined and re-validated with each subsequent round.
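For context, the guideline comes from the Nielsen/Landauer model of problem discovery; the sketch below uses their published average discovery rate of 31% per tester, which will vary by product and test design.

```python
# Nielsen/Landauer model behind the "5 users" guideline: the share of usability
# problems found by n testers, assuming each tester independently uncovers a
# given problem with probability L. L = 0.31 is the average discovery rate
# reported by Nielsen Norman Group; your product's rate will differ.
L = 0.31

def problems_found(n, discovery_rate=L):
    """Expected fraction of usability problems surfaced by n testers."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} testers -> ~{problems_found(n):.0%} of problems")
```

The diminishing returns past 5 testers are why three small rounds of 5, each testing a refined design, tend to beat one round of 15.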

Incentivizing User Testing Participants

Even the most perfectly planned study depends on recruiting the right participants. While some companies have a deeply invested customer base they can rely on for feedback, others may find that getting people involved without incentives isn’t easy. The automated testing options mentioned earlier can eliminate these costs completely, but talking to actual users provides the most insightful results. That’s where incentives come in.

It’s tempting to immediately reach for coupons or discounts, since they carry no upfront costs. However, consider the goals of the test. What types of users are you looking to collect feedback from? The type of incentive can influence the type (and quality) of user who responds to the request. Discounts and coupons are primarily attractive to users planning to shop in the near future; they are ineffective if you’re recruiting users who have just completed a purchase, or those who have never purchased from you before. An incentive that is not tied to the brand being tested has the added benefit of not skewing participants’ responses.

The amount of the incentive should also be adjusted according to the effort required. Participants who must travel to a specific testing location should receive a larger incentive, while remote testers can be offered lower compensation. Also consider the type of user you’re seeking: highly specific needs, such as recruiting from a particular profession, will require a higher incentive than recruiting from the general public.

Finally, the length of time required should help determine the size of the incentive. While short, quick studies maximize participants’ focus, some exercises may take 60 minutes or longer to complete. Incentives should be scaled up or down accordingly; a rough sketch of how these factors might combine follows below.
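One way to keep these adjustments consistent across studies is to encode them in a simple formula. The base rate and multipliers below are purely hypothetical placeholders, not recommended amounts.

```python
# Hypothetical incentive calculator: base rate and multipliers are illustrative
# placeholders only, to show how effort, specificity, and length can combine.
def incentive(base=25.0, in_person=False, specialist=False, minutes=30):
    amount = base
    if in_person:
        amount *= 1.5                 # travel to a testing location warrants more
    if specialist:
        amount *= 2.0                 # niche professions are harder to recruit
    amount *= max(1.0, minutes / 30)  # longer sessions scale the payout up
    return round(amount, 2)

# A 60-minute, in-person session with a specialist participant.
print(incentive(in_person=True, specialist=True, minutes=60))  # 150.0
```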

User Testing Takeaways

Creating a scalable user-testing strategy should be an open conversation between the user experience designers and the brand’s stakeholders. Staying flexible with small, iterative tests in the beginning can lead to a focused strategy that can scale up to accommodate larger testing initiatives. By spending time at the beginning of the project, you’ll acquire the information needed to design a platform that meets specific user needs while avoiding the costly pitfalls of launching a system that is difficult to use.