Posted on: March 12, 2015
The Measurement of UX Success
“Everything should be 3-clicks away”
It's the infamous statement I hear far too often during product and website redesigns. It seems that click counts have become the customary metric for quantifying a good user experience (at least in the minds of the uninitiated), but why? The idea that users will get frustrated if they have to click more than three times to find a piece of content on your website has been around for quite some time. It's an idea with intuitive appeal. Why make users click more than they have to? More clicks usually mean more screens. More screens usually mean spending more time completing tasks. More time spent on tasks usually means higher task failure and a poorer user experience.
Logically, it makes sense: users will be frustrated if they spend a lot of time clicking around to find what they need. In reality, however, researchers have found the Three-Click Rule to have little scientific basis. In most situations the number of clicks is irrelevant; what really matters is that visitors always know where they are, where they were, and where they can go next. Even 10 clicks are fine if users feel they have a full understanding of how the system works.
So if the number of clicks isn't a good measure of a successful user experience, then what is? There are many different metrics that can be leveraged, some more practical than others. I've outlined five proven usability metrics below; each can help you quantify the usability of your product or website.
1. Task Success Rate
Task success rate is the percentage of correctly completed tasks by users. This is probably the most commonly used performance metric that reflects how effectively users are able to complete certain tasks. As long as the task has a clearly defined goal or end point, such as completing a registration form, or buying a certain product, we can measure the success rate.
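Computing the metric is straightforward once each task attempt is recorded as a pass or fail. Here is a minimal sketch; the task names and pass/fail outcomes are hypothetical test data, not figures from any real study:

```python
# Hypothetical usability-test results: one boolean per participant
# attempt, True meaning the task's end point was reached.
results = {
    "complete_registration": [True, True, False, True, True],
    "purchase_product":      [True, False, False, True, True],
}

# Task success rate = correctly completed attempts / total attempts.
for task, outcomes in results.items():
    rate = 100 * sum(outcomes) / len(outcomes)
    print(f"{task}: {rate:.0f}% ({sum(outcomes)}/{len(outcomes)})")
```

With larger samples you would normally report a confidence interval alongside the raw percentage, since five participants per task tells you very little on its own.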
2. User Errors
Generally errors are a useful way of evaluating user performance. Errors can tell you how many mistakes were made, where they were made, how various designs produce different frequencies and types of errors, and overall how usable something really is.
Error rate can be calculated in a few different ways depending on the number of error opportunities in a task. Error opportunities represent any user interaction where there’s a chance of errors occurring. For example, a web form has as many error opportunities as there are fields on the page. The typical calculation is as follows: Total # of occurred errors / Total number of error opportunities.
3. First Clicks
Where a user's first click lands is a strong predictor of task success. If a user's first click is down the wrong path, fewer than half eventually complete the task successfully. If the first click is down the right path, 87% succeed.
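In practice this means splitting your test sessions by whether the first click was on the correct path and comparing completion rates between the two groups. A minimal sketch, using entirely hypothetical session data:

```python
# Hypothetical first-click test sessions: each records whether the
# first click was on the correct path and whether the task was
# ultimately completed.
sessions = [
    {"first_click_correct": True,  "task_completed": True},
    {"first_click_correct": True,  "task_completed": True},
    {"first_click_correct": False, "task_completed": False},
    {"first_click_correct": False, "task_completed": True},
]

def success_rate(sessions, first_click_correct):
    group = [s for s in sessions
             if s["first_click_correct"] == first_click_correct]
    return sum(s["task_completed"] for s in group) / len(group)

print(f"First click right: {success_rate(sessions, True):.0%} succeed")
print(f"First click wrong: {success_rate(sessions, False):.0%} succeed")
```

A large gap between the two groups suggests that fixing top-level navigation labels will pay off more than polishing deeper pages.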
4. Usefulness
If a product isn't useful, then it won't get used, and there will be no user experience to measure. The Technology Acceptance Model (TAM) is a questionnaire for measuring perceived usefulness.
5. Back Button Usage
Do you know how much the "back" button is being used on your website, and when? If your users are pressing it repeatedly in places where it doesn't make sense, then chances are the architecture of your website is broken.
Of course usage of the back button is perfectly normal in most cases, but if the analytics point to heavy usage along with an absence of completed transactions then you need to investigate further.
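That investigation can start with a simple filter over your analytics events: flag sessions that combine heavy back-button use with no completed transaction. The event names, threshold, and session structure below are assumptions for illustration, not any particular analytics tool's schema:

```python
# Flag sessions with heavy back-button use and no completed
# transaction, as candidates for an information-architecture problem.
# "back" events, the threshold, and the session fields are all
# hypothetical; adapt them to your analytics export.
BACK_THRESHOLD = 3

def suspicious_sessions(sessions):
    flagged = []
    for s in sessions:
        back_presses = s["events"].count("back")
        if back_presses >= BACK_THRESHOLD and not s["converted"]:
            flagged.append(s["id"])
    return flagged

sessions = [
    {"id": "a1",
     "events": ["view", "back", "view", "back", "back"],
     "converted": False},
    {"id": "b2",
     "events": ["view", "back", "purchase"],
     "converted": True},
]
print(suspicious_sessions(sessions))  # only "a1" is flagged
```

The flagged sessions are where to look first: replaying or mapping their click paths usually reveals which page is sending people backwards.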
Measuring UX metrics is great – if you are measuring the right thing. Be cognizant of what these metrics actually mean and you’ll make huge strides in understanding the experiences of your users.
Seneca Brandi is an Experience Architect at Akendi, a firm dedicated to creating intentional experiences through end-to-end experience design. To learn more about Akendi, member research, or user experience design, visit www.akendi.com or email email@example.com.
Seneca Brandi brings over 6 years of experience in the fields of user experience research, interaction design, and usability testing to his current role at Akendi. With a Masters degree in Human Computer Interaction, he is an advocate for user research and data driven decision-making. His research experience includes both qualitative and quantitative methodologies ranging from ethnographical field research to controlled laboratory observations and testing. Seneca’s experience with a diverse set of enterprise level companies has given him extensive exposure to both large and small projects for a variety of clients, including public sector and private sector organizations such as the Department of Fisheries and Oceans, CATSA, the Canadian Real Estate Association, Home Hardware, Algonquin College, the Royal Ontario Museum, Atlantic Lottery, and BlackBerry.