
User experience (UX) testing has a reputation for being time-consuming and expensive.

This is probably because many people hear “user testing” and picture researchers in white coats, a controlled lab environment, and huge data sets to analyze. It’s unfortunate, but because of this mindset, gathering empirical evidence from real users is too often left out of the design and development of a site or product. This needs to stop. We can do better. In the past it may have taken all of those things to get useful information, but today user experience testing can be fast and low cost, and it helps ensure that we’re making design and UX choices based on evidence rather than assumptions.


Example: Testing for Category Comprehension

Our team recently faced a naming challenge for categories in an app design. We had three categories: “businesses,” “places,” and “events.” There was concern (and rightfully so) that examples of “businesses” and “places” would overlap, which could be confusing. We didn’t want users to look for a business, like a bar, under the category “places,” because for us a “place” represented a non-commercial space, like a park.

Instead of spending a lot of time and mental effort working through what we thought a user would expect, we turned to Mechanical Turk to test the naming convention. Mechanical Turk is a marketplace for finding people willing to complete tasks for you. You set the amount you are willing to pay and the number of users you want to complete the task; the higher the pay, the more quickly workers gravitate to your test.

To satisfy our needs, we conducted five different tests. We kept the original naming convention as a control and added four alternative options for “Places”: “Hangouts,” “Destinations,” “Activities and Recreation,” and “Parks and Recreation.” We then asked users to provide specific examples for each of the options they were presented. We set our pay at $0.50 per submission, and in less than 24 hours we had received our target number of responses: 15 to 20 submissions per test.
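As a sketch of the analysis step, the free-text examples collected for each naming variant can be tallied by whether they match the intended non-commercial meaning of the category. All response counts below are invented for illustration; they are not the actual study data.

```python
# Hedged sketch: compare naming variants by how often workers' example
# responses matched the intended meaning. The True/False classifications
# here are hypothetical stand-ins for hand-labeling each free-text answer
# (True = a non-commercial example like "park"; False = commercial or
# off-target, like "bar").
from collections import Counter

responses = {
    "Places":                    [True] * 16 + [False] * 2,
    "Hangouts":                  [True] * 6  + [False] * 12,
    "Destinations":              [True] * 9  + [False] * 9,
    "Activities and Recreation": [True] * 8  + [False] * 10,
    "Parks and Recreation":      [True] * 11 + [False] * 7,
}

def match_rate(results):
    """Fraction of responses that matched the intended meaning."""
    counts = Counter(results)
    return counts[True] / len(results)

rates = {name: match_rate(r) for name, r in responses.items()}
winner = max(rates, key=rates.get)
print(winner)  # the variant with the highest match rate
```

A simple tally like this is usually enough at 15–20 responses per variant; the goal is a directional answer, not statistical significance.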


The Results

The results quickly affirmed that our original term, “Places,” was the right one in this case. Users were able to quickly associate the term with the types of places we were hoping to put under that category. The other terms produced examples that were commercial, completely off, or a better fit for “businesses.” We had our answer. We presented the findings to the team with an analysis of the results, and continued on with the design.

What could have taken days, possibly weeks, of discussion took less than 48 hours and allowed us to move forward with the confidence that the decision was backed by real data, not just our best guesses.

User experience testing does not need to be a great effort that kills a budget. In the example above, real-world evidence was used to validate the naming convention of categories. The costs were nominal (just under $50), the test was short, and the outcome was a more confident and cohesive design based on real user feedback. It’s a small example, but an important one that illustrates how we can back up assumptions and insights with data. Moving to a build/test loop throughout the design and development of a product or site is the right direction.