
The ROI of UX Research

ux / featured / usability

Few would argue that user experience (UX) testing prior to launching a product is anything but beneficial; we only need to look at Healthcare.gov and Lawn Darts to see why. The success of design changes following a UX study can be readily assessed with traditional metrics like error rates and Likert-scale satisfaction ratings, which make it easy for UX researchers to pinpoint the successful and troublesome areas of a product. But how do these traditional metrics of user experience, like errors and satisfaction, impact the bottom line?

To see how tough that question is to answer, imagine you are proposing a usability study to a CEO who has heard your spiel on assessing the user experience. You’ve advocated that 10% of the project’s design budget be spent on usability, just as the Nielsen Norman Group (1) advises. You think the deal is done when the CEO throws you a curveball and asks, “What is my return on investment?”

Any UX researcher’s toolkit should hold a repertoire of techniques for assessing a product, like creating personas, conducting in-lab usability tests, and coding qualitative feedback. It should also hold answers to tough questions like this one. Fortunately, a 2002 whitepaper by Aaron Marcus and Associates (2) highlights a number of factors that can serve as data-based responses (see Table 1).

Table 1. Factors for demonstrating the return on investment of usability (Marcus, 2002)

UX research is an up-front cost that saves companies money in the long run. How much money varies, but for data-driven CEOs it helps to have a tangible figure to reference. For example:

Mantei and Teorey (1988) (3) calculated how human factors research contributes to yearly savings. They estimated that, for a company of 250 people, first-year savings from introducing human factors elements into the software design process for an intranet system can reach $193,000. See Table 2 for a breakdown of those savings.

Table 2. Breakdown of estimated first-year savings (Mantei & Teorey, 1988)

Let’s take a quick look at how they calculated the biggest number in Table 2, the error reduction cost. The calculation rests on a few assumptions: employees encounter an error on any given task with 2.5% probability, they complete 20 tasks an hour, they spend 3 hours a day working in the system, the average work month is 21.5 days, and the intranet system is used 12 months a year. The formula for the number of errors per year across the 250-person company is:

    errors per year = error probability × tasks per hour × hours per day × days per month × months per year × employees
                    = 0.025 × 20 × 3 × 21.5 × 12 × 250
                    = 96,750
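
To make the arithmetic concrete, here is a minimal Python sketch of that estimate. The error probability, task rate, and schedule come straight from the assumptions above; the per-error recovery time and hourly wage used to convert errors into dollars are illustrative assumptions, not values from the paper.

    # Mantei & Teorey-style error-count estimate, using the assumptions above.
    ERROR_PROBABILITY = 0.025   # chance of an error on any given task
    TASKS_PER_HOUR = 20         # tasks an employee completes per hour
    HOURS_PER_DAY = 3           # hours per day spent in the intranet system
    DAYS_PER_MONTH = 21.5       # average working days per month
    MONTHS_PER_YEAR = 12        # months the system is in use
    EMPLOYEES = 250             # company size in the example

    errors_per_year = (ERROR_PROBABILITY * TASKS_PER_HOUR * HOURS_PER_DAY
                       * DAYS_PER_MONTH * MONTHS_PER_YEAR * EMPLOYEES)
    print(f"Errors per year: {errors_per_year:,.0f}")  # 96,750

    # Converting errors to dollars takes two more inputs. Both figures below
    # are assumptions for illustration only, not values from the paper.
    MINUTES_LOST_PER_ERROR = 1   # assumed recovery time per error
    HOURLY_WAGE = 25.0           # assumed loaded hourly cost per employee

    annual_error_cost = errors_per_year * (MINUTES_LOST_PER_ERROR / 60) * HOURLY_WAGE
    print(f"Annual cost of those errors: ${annual_error_cost:,.0f}")  # ~$40,000

Even with a conservative single minute lost per error, the errors alone add up to tens of thousands of dollars a year under these assumptions, so shaving even a fraction off the error rate pays for a lot of usability testing.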

These savings are for internal systems, but the cost of errors translates easily to consumer products as well. This is where traditional UX metrics, like error rates and Likert-scale satisfaction questions, come into play. Instead of lost workplace productivity, consumers lose satisfaction with your product and begin to look elsewhere for a service that lets them complete their tasks successfully (i.e., make purchases). That can mean fewer sales, fewer customers, and a weaker public perception. The exact amount varies with the severity of the errors and the necessity of the product you offer, but this quick calculation makes it easy to see how a truly usable product can save a lot of money.
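
As a rough sketch of that translation, the example below estimates the annual revenue at risk when checkout errors push shoppers to abandon a purchase. Every input here (traffic volume, abandonment rate, average order value) is a hypothetical placeholder, not data from the cited studies; substitute figures from your own analytics.

    # Hypothetical consumer-facing analogue: revenue at risk when checkout
    # errors cause shoppers to abandon a purchase. All inputs are assumptions
    # for illustration only.
    MONTHLY_CHECKOUTS = 50_000        # assumed checkout attempts per month
    ERROR_RATE = 0.025                # reusing the 2.5% per-task error rate
    ABANDON_RATE_AFTER_ERROR = 0.30   # assumed share of shoppers who give up
    AVERAGE_ORDER_VALUE = 40.0        # assumed order value in dollars

    lost_orders_per_month = MONTHLY_CHECKOUTS * ERROR_RATE * ABANDON_RATE_AFTER_ERROR
    annual_revenue_at_risk = lost_orders_per_month * AVERAGE_ORDER_VALUE * 12
    print(f"Annual revenue at risk: ${annual_revenue_at_risk:,.0f}")  # $180,000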

Whatever assumptions go into the cost-savings calculation for internal or external products, UX testing at any stage of a product’s development is beneficial. The next time you need to answer tough questions about how traditional usability metrics impact the bottom line, remember these equations to prove the ROI of usability.

References:

  1. Nielsen Norman Group. Usability 101: Introduction to Usability. http://www.nngroup.com/articles/usability-101-introduction-to-usability/
  2. Marcus, A. (2002). Return on investment for usable user-interface design: Examples and statistics. Aaron Marcus and Associates, Inc. Whitepaper.
  3. Mantei, M. M., & Teorey, T. J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM, 31(4), 428-439.
