Every year we do lots of usability testing on products in various states of design and development. Many are already in market, and in each one we find, on average, at least 20 usability issues. Every time we run a test I’m reminded that usability testing is a wonderful thing – and a wonderfully misunderstood thing. It’s a research method that enables us to de-risk design.
Done at the right time and in the right way, usability testing lets us de-risk designs before we launch them. What does this mean? It means that if we test our products, the things that will get in the way of users being able to use them will be exposed. Exposing these issues ahead of launch gives us the chance to address them cost-effectively. Ideally, still prior to launch, we can fix them and then retest to see whether our fixes “work” – this is iterative design. Iterative design is a wonderful thing too.
So why is it that we still have to justify, persuade, cajole, and sometimes even fight to get time in a product design process for iterative design and testing? If you were a software developer, would you ever write code, ship it, and launch it without first testing it for bugs? Of course not. So why do we keep designing UIs largely on our assumptions about how users will use them, shipping those untested designs to developers to code, and then launching without testing the UI to see if there are bugs in it? Not functional bugs – usage bugs.
The software team has made sure everything functions: when a user clicks, the action that’s supposed to take place, takes place – they’ve fixed the bugs they found ahead of launch. Usage bugs, though, are the kinds of things that can stop users in their tracks: labels that make no sense, groupings of information that don’t connect for users, task flows that don’t map to the way users work through tasks, or visuals that get in the way of understanding what to do. We’ve all encountered products that were designed with every good intention of “working”, and that are functionally sound, but that don’t “work” for users. The reasons for this are any mix of many. One of the most common is the assumption that users will simply learn the product.
This is a bit of a cop-out, depending of course on the nature of the product we’re building. Yes, most users can learn to use a product that wasn’t designed to meet their needs as well as it might have been, but why make them learn it the hard way, by struggling, if we don’t “have” to? Moreover, depending on how mandatory it is for users to learn the product, can we afford to have them go elsewhere to find something “easier” to use? Learnability isn’t something we can measure effectively in a typical usability test, so what users should learn and what they shouldn’t have to learn becomes an opinion-based conversation. Those who point to learnability as justification for not fixing exposed usability issues may have a point in some cases, such as complex software systems designed for very specific applications. But the conversation about what, and how much, users “should” have to learn is one that must be had.
There is another reason why, even when we do test, we still launch products with usability issues.
So what can we do? We do what market researchers did decades ago: continue to advocate for more rigour in our research specialty – user research. We continue to justify, persuade, cajole, and sometimes even fight to get a proper product design plan in place, one that gives user research, design, and testing the time they need to be done well. The impact of not doing this is too severe not to try, and eventually product designers and developers will learn this, the hard way.