Cindy Beggs

Akendi Alumnus

The Case for Usability Testing

Every year we do lots of usability testing on products in various stages of design and development. Many are already in market, and in each one we find, on average, at least 20 usability issues. Every time we run a test I’m reminded that usability testing is a wonderful thing – a wonderfully misunderstood thing. It’s a research method that enables us to de-risk design.

Done at the right time and in the right way, usability testing de-risks designs before we launch them. What does this mean? It means that if we test our products with users, the things that will get in the way of people actually using those products are exposed. Exposing these issues ahead of launch gives us the chance to address them cost-effectively: ideally, still before launch, we fix them and then retest to see whether our fixes “work”. This is iterative design, and iterative design is a wonderful thing too.

Usage Bugs

So why is it that we continue to have to justify, persuade, cajole, sometimes even fight, to get time in a product design process for iterative design and testing? If you were a software developer, would you ever write code, ship it and launch it without first testing it for bugs? Of course not. So why do we continue to design UIs largely on assumptions about how users will use them, ship these untested designs to developers to code, and then launch without testing the UI to see if there are bugs in it? Not functional bugs – usage bugs.
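To make the analogy concrete, here is a minimal sketch of the kind of functional testing developers routinely do before launch. The function and its name are invented purely for illustration, not taken from any real product:

```python
# A hypothetical function a developer might ship – the name and logic
# are illustrative only.
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage (e.g. 20 for 20% off)."""
    return round(price * (1 - percent / 100), 2)

# Functional tests like these catch functional bugs before launch...
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99

# ...but no assertion here can tell us whether the control that triggers
# the discount is labelled in a way users understand, or whether the
# checkout flow matches how users actually shop. Those are usage bugs,
# and only testing with real users exposes them.
```

The point of the sketch: a product can pass every functional test and still fail its users.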

The software team has made sure everything is functioning: when a user clicks, the action that’s supposed to take place, takes place – they’ve fixed the bugs they found ahead of launch. Usage bugs, though, are the kinds of things that can stop users in their tracks: labels that make no sense, groupings of information that don’t connect for users, task flows that don’t map to the way users work through tasks, or visuals that get in the way of understanding what to do. We’ve all encountered products that were designed with every good intention of “working”, and that are functionally sound, but that don’t “work” for users. The reasons for this are some mix of the following:

  • We don’t actually talk to or watch real users to understand how they use our products, let alone capture this information
  • Since we don’t talk to users directly, we don’t capture their scenarios of use – what they actually do with our product or similar products
  • We look at analytics and know where users go on our site or app, but not when, why or how often – only that many or few of them click on a given page
  • We don’t know what differentiates the way one user uses our product from the way another does, and we don’t articulate and capture which user is primary for our product
  • We take customer research – demographics, psychographics, attitude, insight, opinion based research – and think we have the whole picture in terms of knowing how to design for users
  • We design based on the way “I” would do it, not recognizing that “I” am not the user
  • We don’t plan properly ahead of the development schedule so we “run out of time” for research and testing

The Interesting Phenomenon That Happens When We Have Actually Done Testing

  • We identify usability issues but say that users will “learn” eventually how to use the system.

This is a bit of a cop-out, depending of course on the nature of the product we’re building. Yes, most users can learn to use a product that wasn’t designed to meet their needs as well as it might have been – but why make them learn it the hard way, by struggling, if we don’t “have” to? Moreover, depending on how mandatory it is for users to learn the product, can we afford to have them go elsewhere to find something “easier” to use? Learnability isn’t something we can measure effectively in a typical usability test, so what users should learn and what they shouldn’t have to learn becomes an opinion-based conversation. Those who point to learnability as justification for not fixing exposed usability issues may have a point in some cases, such as complex software systems designed for very specific applications. Still, the conversation about what, and how much, users “should” have to learn is one that must be had.

The Impact of Not Testing

There is another reason why, even when we do test, we still launch products with usability issues:

  • We don’t articulate and capture design principles that will drive how we design, let alone how we will react to or fix usability issues even if we do have time for testing

So what can we do? We do what those in market research did decades ago: continue to advocate for more rigour in our research specialty – user research. We continue to justify, persuade, cajole, sometimes even fight, to get a proper product design plan in place, one that gives user research, design and testing the time they need to be done well. The impact of not doing this is too severe not to try – and eventually product designers and developers will learn this, the hard way.






About Akendi

Akendi is a human experience design firm, leveraging equal parts experience research and creative design excellence. We provide strategic insights and analysis about customer and user behaviour and combine this knowledge with inspired design. The results enable organizations to improve effectiveness, engage users and provide remarkable customer experiences to their audiences.