Cindy Beggs

Akendi Alumnus
Make it “easy to use” for “tech savvy” people

Have any of you heard these phrases from clients or stakeholders?  We hear them often; not always at the same time, but definitely on their own.  Along with “make it intuitive”, they come up in conversations around user requirements, in discussions of who we should be recruiting for testing, and in user research and design conversations.  And they’re stated as though that’s enough said; they epitomize the assumed understanding in the UX field of what intuitive, easy to use designs and tech-savvy people look like.

However, “easy to use” isn’t measurable on its own, and “tech savvy” to me might mean “tech novice” to you, unless we define these phrases more clearly.  So how do we do that?  We’ll start with “easy to use” in this post and tackle “tech savvy” in another.

Creating Metrics for “Easy to Use”

Usability research provides us with three characteristics from which we can extract metrics to describe ease of use: effectiveness, efficiency and satisfaction (take a look at the ISO standard for usability, ISO 9241-11: Ergonomics of human-system interaction, Part 11, if you haven’t before).  Gathering measures of effectiveness, efficiency and satisfaction through usability testing allows us to really define what we mean when we say “make it easy to use”.  Two of these characteristics, effectiveness and efficiency, are also indisputable.

Effectiveness is measured through users’ behaviour: could the user complete the task, yes or no.  Efficiency is measured through time, error rates and optimal path: how long did it take the user to do this, how many mistakes did they make along the way, and what path(s) did they follow.  There is no room for interpretation here, as long as we’re collecting this data in an unbiased, systematic way.
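
To make that concrete, here is a minimal sketch in Python of how effectiveness and efficiency roll up from raw observations. The session data and field names are invented for illustration, not taken from any real study.

```python
from statistics import mean

# Hypothetical per-participant observations for a single task
# (field names and values are illustrative only).
sessions = [
    {"completed": True,  "seconds": 14.2, "errors": 0},
    {"completed": True,  "seconds": 23.8, "errors": 1},
    {"completed": False, "seconds": 61.0, "errors": 4},
    {"completed": True,  "seconds": 18.5, "errors": 0},
]

# Effectiveness: could the user complete the task, yes or no.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Efficiency: how long it took and how many mistakes were made along the way.
avg_time = mean(s["seconds"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

print(f"Task success rate: {success_rate:.0%}")
print(f"Average time on task: {avg_time:.1f} s")
print(f"Average errors per attempt: {avg_errors:.1f}")
```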

The usability characteristic of satisfaction is gathered from users’ self-reporting how easy they thought each task was to complete; it is a measure of users’ perception.  It’s also one of the most interesting characteristics if you get invigorated by the complexity of human psychology.  Some users who did not complete a given task successfully, made all kinds of errors trying, and spent a great deal longer than they should have may still give a high satisfaction rating, indicating that they thought the task was very easy to complete.  Other users who completed the task successfully, quickly and without error may give a very poor satisfaction rating, indicating they did not think the task was easy to complete.

Often there is a positive correlation between effectiveness (success on task) and satisfaction, but one doesn’t cause the other.  Users are complex: one might have a beef about the colours, the fonts or the company whose designs they are testing, and therefore rate it poorly no matter how “easy to use” it was.  Others may take the satisfaction rating quite personally, as though it reflects their own performance on the task, and rate it highly, dismissing or even being unaware that they didn’t complete the task successfully.  There are as many reasons why users rate satisfaction the way they do as there are users: it’s a measure of perception.

Where do Metrics for Ease of Use Come in?

The organization’s stakeholders need to determine these metrics for themselves.  Many have yet to establish usability metrics, and some rely heavily on the SUS (System Usability Scale, another one to review if you’ve not heard of it) score to gauge how easy their designs are for people to use; overlooking, or perhaps being unaware, that SUS measures how easy their designs are perceived to be through users’ self-reporting, not how successfully or efficiently users completed their tasks.  Ideally, and in a mature usability practice, organizations would set metrics for how they are going to define “easy to use” based on effectiveness and efficiency.
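
For readers who haven’t met SUS before, here is a minimal sketch of its standard scoring arithmetic (the ratings below are invented). It also makes plain that the score is built entirely from self-reported agreement, not observed behaviour.

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5.
    Odd-numbered items contribute (rating - 1), even-numbered items
    contribute (5 - rating); the sum is scaled by 2.5 to give a
    0-100 score. It is still a measure of perception."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# One participant's (invented) ratings for the 10 SUS statements.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```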

For example: if 85% of our users can successfully complete our key tasks (which would also be defined) without error and within 20 seconds, we can say our site is “easy to use”.  If we do not achieve these metrics, gathered through a benchmark, summative usability test, we will not deem our site “easy to use” and will continue to iterate the designs and re-test until it is.  Without setting explicit, indisputable metrics that define what we mean by “easy to use” or “intuitive”, we continue to perpetuate design by opinion and ease of use based on assumption.
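
As a rough sketch of how such a target could be checked against benchmark test data, using the hypothetical thresholds from the example above and invented results:

```python
# Hypothetical benchmark results for one key task (invented data).
results = [
    {"completed": True,  "seconds": 16.0, "errors": 0},
    {"completed": True,  "seconds": 12.4, "errors": 0},
    {"completed": True,  "seconds": 19.8, "errors": 0},
    {"completed": False, "seconds": 35.2, "errors": 2},
    {"completed": True,  "seconds": 14.1, "errors": 0},
]

# "Easy to use" target from the example: 85% of users complete the
# key task without error and within 20 seconds.
passed = [r for r in results
          if r["completed"] and r["errors"] == 0 and r["seconds"] <= 20]
pass_rate = len(passed) / len(results)

print(f"Pass rate: {pass_rate:.0%} -> "
      f"{'meets' if pass_rate >= 0.85 else 'does not meet'} the 85% target")
```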

Setting metrics really is easy to do; making our designs easy to use…well, that requires more than opinion and assumptions.