Cindy Beggs

Akendi Alumnus

Beware: Concept Testing that thinks it’s Usability Testing

When we talk about user research, we include the narrower activity of usability testing. Usability testing, like all user research, is qualitatively different from customer research.

The heart of it is this: customer research answers the very basic question, why will I buy? (or, for those in the not-for-profit sector, why will I engage with or use your services or product?). User research answers the very basic question, how do I use it? The former is perception-based; the latter, behaviour-based.

So, what do we do when we have two, three, or four design concepts, or dressed-up wireframes, that differ in how the content is laid out and in the interaction design patterns used to support that content? The answer depends on the questions we want answered about those concepts. If we want to know "which do you prefer?", the question that follows from why will I buy, we're doing concept testing. If we want to know "which is easier to use?", the question that follows from how do I use, we need to do usability testing.

What is Usability Testing?

Usability testing is a behaviour-based, observational, systematic method of learning the answer to "how do I use?"; it is, in fact, a social science experiment. Usability testing will tell you how easy something is to use without you having to ask many questions at all. But answering the question "which is easier to use?" requires more than a few concept screens; it requires something closer to, in today's speak, an MVP, which can mean anything from a set of wireframes built out, screen by screen, to support a task flow, to a fully functioning product, website, app, or mobile site.

If you're really unsure whether a different interaction pattern would make task fulfillment easier, then build out screens to fulfill those tasks using that different pattern and, again, test the full flow with users. That kind of testing will answer the question of which design pattern is easier for users to use.

When to use Concept Testing?

If, however, you've got two, three, or four different design concepts and you want an answer to the question "which do you prefer?" (a dropdown versus thumbnails versus a mega menu, and so on), that answer can be found through traditional customer or market research methods: focus groups, surveys, or concept testing. That kind of research only needs a few screens that show the same content rendered through different design patterns. Bring these concepts to a focus group to get feedback on look, feel, and preference; then you can winnow your designs down to one and build it out to support the 10-12 key usage scenarios the product, website, app, or mobile site needs to support.

Be careful, though: in the kind of testing I just described, concept testing, you're not going to get a valid answer about which concept is easier to use, even though you'll collect a lot of opinion about which one looks easier to use. Users aren't designers. Getting their input on interaction preferences is fraught with risk, unless we simply take that feedback as input and then apply our own skills as interaction designers to come up with a pattern that best suits the nature of the content, the types of tasks we're designing to support, and the context of use.

It's worth repeating: users aren't designers. We rely on them for feedback, no doubt, but how and when is key. Don't ask users to tell you which interaction design pattern will make something easier to use unless you are conducting real usability testing.


Cindy, I like this but disagree with some of the details. Unfortunately there is no predefined set of design stages, and words like research, testing, experiments, and concepts are remarkably slippery. I taught Ergonomics/Human Factors (earlier names for UX, for me) to Industrial Design undergraduates and, for my part, distinguished:

Research (pre-design: what the industry and users know already)
Analysis (careful examination of tasks, users, use scenarios, etc.)
Design (concept creation, concept selection, detailing, etc.)
Experiments (formal scientific comparisons, e.g. of technology choices)
Testing (evaluating designs through user performance, including usability)
Delivery (final polishing, market-driven approaches, informing users, etc.)

I hope these help to spark a discussion, even though I do not expect them to be fully acceptable to all.


