Good design depends on whether users “get it,” and the only way to evaluate whether you successfully communicated how the design works is by measuring the product's usability.
The ISO 9241-11 standard evaluates product usability along three criteria:
- Effectiveness – sometimes called utility or usefulness; regardless of the label, the basic question to answer is “does the system support the user in achieving their goals?”
- Efficiency – having determined that the user can achieve their goals, you want to ensure that those goals can be completed as quickly as possible, with as few errors as possible
- Satisfaction – finally, a usable system should also be a positive experience
The method for measuring usability has a straightforward name: a usability test. A usability test will help you determine whether your user interfaces are “easy to use” and, if not, provide insights into why users struggled.
There are four basic steps in a usability test:
- Participant recruiting – in most cases you don’t need many participants; five is usually enough for usability testing. What matters more is that you test with actual users. Pulling in someone from a different department does not make for a reliable study, so make sure you recruit the right users by taking the time to create a recruitment screener.
- Protocol development – a usability test is an experiment, so you will need to create a test plan and script to ensure that every participant is given the same instructions and level of support; otherwise it will be difficult to draw conclusions.
- Data collection – while participants are completing the tasks from the protocol, you will need to capture their performance for analysis. The amount and types of data you collect will be determined by the objectives of the testing. Performance data (for example, time on task and error rates) will determine whether the design meets business requirements, while screen and audio capture provide richer insights into user behaviour. If a participant indicates that they didn’t like something about the design, probe to find out why.
- Data analysis – time to crunch the data and see what insights fall out of the usability test. Unless you ran a large number of participants, don’t try to run the more complex experimental statistical methods. Instead, look for trends: errors or behaviours demonstrated by three or more participants.
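The “three or more participants” heuristic from the data-analysis step can be sketched as a simple tally. The participant IDs and issue labels below are hypothetical examples, not data from a real study:

```python
# Sketch of the trend analysis described above: flag issues that
# three or more participants demonstrated. All data is illustrative.
from collections import Counter

# Issues observed per participant during the sessions (P1..P5);
# sets avoid double-counting an issue within one session
observations = {
    "P1": {"missed save button", "misread error message"},
    "P2": {"missed save button"},
    "P3": {"missed save button", "misread error message"},
    "P4": {"slow checkout"},
    "P5": {"misread error message"},
}

# Count how many participants hit each issue
counts = Counter(issue for issues in observations.values() for issue in issues)

# Keep only issues seen across 3 or more participants
trends = sorted(issue for issue, n in counts.items() if n >= 3)
print(trends)  # ['misread error message', 'missed save button']
```

Anything below the threshold (here, “slow checkout”, seen once) stays on the issues list but isn’t treated as a trend until more participants confirm it.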
At the end of the data analysis you will have evidence to back up your design decisions and a list of issues and concerns to tackle in further product improvements; then it’s time to start getting ready for the next round of testing. From recruiting to analysis, the testing process can take as little as two weeks, which is not a very large investment of time or money but can have a significant impact on your chances of success.
Daniel Iaboni is Lead User Experience Specialist at Akendi, a firm dedicated to creating intentional experiences through end-to-end experience design. To learn more about Akendi or user experience design, visit www.akendi.com.