Posted on: 11 September 2014


Seneca Brandi
Akendi Alumnus
Bias in the UX Lab
Bias is an inclination. Most biases – like preferring the smell of perfume to the smell of garbage, or assuming someone who’s shivering is cold – are helpful. But in the realm of UX research, bias is like an evil villain lurking in the background, threatening to compromise research findings by any means possible.
Bias in research is everywhere. Sometimes it’s blatant, like asking an Apple fanboy to evaluate the latest iPhone. Other times it’s very subtle. For example, I once participated in a study where researchers asked participants whether they thought $50, $20, $10 or $5 was a fair amount to pay for a particular product. Because the higher amounts were listed first, participants were more likely to choose a higher number. If researchers had listed the prices in the opposite order, participants would probably have indicated that a lower amount was a fair price.
(The above psychological effect is called “Anchoring,” a form of bias that reflects our tendency to rely too heavily on the first piece of information we’re given.)
As a UX researcher at Akendi, I’ve observed participants exhibiting all kinds of interesting behaviours, and in particular I’ve noticed specific recurring biases.
The Most Common Biases to Avoid When Testing
Social Desirability
Users generally tell you what they think you want to hear and are less likely to say disparaging things about people and products. This often means users blame themselves (not the product) when they run into a problem.
Task-Selection Bias
“If you’ve asked me to do it, it must be possible.” I’ve never given a participant a task that was impossible to complete – it’s potentially unethical and probably doesn’t even make sense. But participants know this too, which makes for an unrealistic scenario.
Hawthorne Effect
Participants want to appear more skilled than they are. I’ve seen participants struggle with a task and then tell me afterwards that it wasn’t that hard. These responses are understandable: participants are being observed in a usability session, and they may want their audience to see them as competent.
Positive Illusions
People are bad at predicting their own behaviour. When asked about future behaviour, study participants tend to answer with a positive skew, describing how they envision themselves acting rather than how they will actually act.
Fatigue
Participants may suffer from survey burnout or response fatigue, where they rush through responses because they’re tired of yet another survey.
I’ve listed only five types of bias; however, many more have been identified in cognitive psychology and behavioural economics. The important thing to remember is that many of these biases can be avoided simply by observing participant behaviour rather than relying on what participants say about themselves. If a bias cannot be avoided, it is important to be aware of its presence and communicate its potential impact on decisions. Becoming familiar with cognitive biases – whether you’re drafting interview scripts, responding to participant questions, or analyzing findings – will make your research more robust, leading to increased confidence in your design decisions.
