We at Akendi are increasingly asked to conduct usability tests with digital products in the later stages of product development. This means we come in to test at the stage where the wireframes are already worked out and the client is working on the coding and visual / brand design of the digital product.
So, we create our testing protocol, recruit end users and conduct the usability test. So far so good. Once we present the results, however, additional questions come up. Questions like: can you show me what the redesigned page will look like now? Half of the users didn’t complete their task; was that because of the software testing environment? Did users think this was the actual product they were looking for? Did the users get a good sense of the value of this digital product? Did they miss anything? Relevant questions, no doubt, but how do they relate to usability testing as a research technique?
Ok, let’s step back for a minute. Usability testing, like any other UCD/HCI/UX technique, has a specific approach with specific – high value – outcomes. This technique has two terms in it: ‘usability’ and ‘testing’.
Let’s look at the first term: we’re testing the usability of a digital product (or other user experience). Usability has a well-established definition. It includes effectiveness (can I do with the product what I wanted to do, can I find the information?), efficiency (how long does that take, how much effort do I have to put in to make it work?), satisfaction (was the experience positive and engaging?) and learnability (how long is it reasonable for me to take to learn how to use this?).
Let’s look at the second term: we’re testing. That means that the product is being examined, assessed, measured and verified to determine, objectively, whether or not it meets certain defined criteria. The outcome is a pass or fail, just like with a school test, a product safety test or a software quality assurance test. Testing can, and should, happen early, with a prototype of the product, whether on paper, in mock-ups or in simple code/HTML. Testing is often done at specific times in the development process; usability testing is preferably done when there is an early interaction design, but there are additional opportunities to test later in the development process as well.
So, when we’re doing usability testing, that is what we’re testing: the usability, how easy the product is to use. So what aren’t we testing? Usability testing doesn’t validate the product’s value. This is partly because of the low number of participants in a test project, typically between 8 and 18 people. It is also because of the context of the test setting: the setting doesn’t replicate all the different contexts a user would encounter in ordinary life. We’re asking each person to perform a task not because they want to or because of a need they have identified, but because it’s part of the test. Even though you do get some valuable anecdotal feedback during the test sessions, these anecdotes are simply no basis for firm statements about the customer or user value of your product.
Usability testing is diagnostic, not prescriptive: it clearly tells you where the usability issues are, but it doesn’t give you clear, accurate design solutions. Users are quite capable of telling you something doesn’t work according to their needs, expectations and logic, but after that it turns out to be much harder for them to come up with reasonable solutions for the less-than-ideal interaction they just had. You can’t expect a participant who has just done a usability test to switch quickly from analysis to good design solutions without knowing the project and product boundaries and the design rationale that led to this user experience. Unfortunately, what we see too often in presenting usability findings are knee-jerk reactions and quick fixes that seem the most obvious solution at first glance. In reality, product designers need time to assess usability results in the full context of the project and product boundaries. They are the people who are much better positioned to come up with reasonable fixes to experience flaws.
What usability testing does do is identify what is likely to go wrong with your user experience flows, functionality, page layouts and where the user needs are not met or understood. It is truly the software Quality Assurance of Experience Design, making sure we maximize the potential of a product or service experience and not let some interactions – i.e. mechanisms of reaching value – get in the way.
Tedde van Gelderen is President at Akendi, a firm dedicated to creating intentional experiences through end-to-end experience design. To learn more about Akendi, visit www.akendi.com.
Akendi is a product strategy, user experience design and usability research firm. We are passionate about the creation of intentional experiences – whether those involve digital products, physical products, mobile, service or bricks-and-mortar interactions. We work shoulder-to-shoulder to optimize the experiences you deliver.