Seeing is not always believing.

We have five senses (some people claim they have six) and you cannot trust any of them. Cognitive psychologists studying exteroception try to make sense of how we make sense of our senses, and the results of some of their experiments are quite remarkable. Give people a perfectly round coin to hold while distorting their vision with a lens that makes the coin look oval, and they will tell you it is oval even though they can feel that it is not. Older phone lines only transmitted audio up to 2180Hz, but the brain compensated for this and made up the rest based on previous experience. You hear what is not there, a neat trick that some audio (de)coders, such as the one in Philips’ DCC player, exploited. The list of these misperceptions, if that is what we should call them, goes on and on.

If we see what is not there and do not see what actually might be there, what consequences does this have for doing UX research? I found this out first-hand when developing a system for visually impaired users. The concept under test was an auditory display (a square with edges) on which a mouse could be moved to find and manipulate auditory objects. To test whether this worked in principle, I conducted an experiment in which participants had to find a playing card and drag it to the trashcan, just as a Windows user would manipulate objects with a mouse on a GUI. Simple tasks, a simple setup, and two groups of participants.

First to take part were the visually impaired participants, who seemed to sail through the tasks. Nice! Second were the sighted participants, who were blindfolded during the test. At this stage it is probably relevant to say that these were not just blindfolded participants; they were blindfolded colleagues who also happened to be Dutch. They didn’t do so well and seemed to take quite a bit longer. That’s what you would expect, right? Working without the use of your eyes for the first time is quite tedious, so visually impaired people should be quicker.

However, as some of you know, I am a UX person with a computer science background, and the scientist in me likes numbers. So I measured everything automatically and applied some solid statistics to the results. The outcome? No significant difference in efficiency (task completion time) or effectiveness (ability to complete the tasks). Weird, I could have sworn… Time to reflect on what really happened during those sessions. The visually impaired participants were ‘out on a jolly’ to help somebody who was helping them, which made the tests a pleasant experience for all (including the water-slobbering guide dogs). My blindfolded colleagues, on the other hand, found out how awkward it is to do things in the dark and, being Dutch and all, didn’t hold back in telling me so. This left me hoping that it would all be over soon, literally changing my perception of time.
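For readers who want a feel for what "solid statistics" on task completion times can look like, here is a minimal sketch of comparing two groups with Welch's t-test. The numbers and group labels below are invented for illustration; they are not the study's data, and the original analysis may well have used a different test.

```python
# Sketch: comparing task completion times of two groups with Welch's t-test.
# The sample data below are hypothetical, invented purely for illustration.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2  # sample variances (n-1)
    return (mean(a) - mean(b)) / (var_a / len(a) + var_b / len(b)) ** 0.5

# Hypothetical completion times in seconds, one value per participant.
visually_impaired = [41.0, 38.5, 44.2, 39.8, 42.1, 40.3]
blindfolded       = [43.5, 40.1, 45.0, 39.2, 44.8, 41.9]

t = welch_t(visually_impaired, blindfolded)
print(f"t = {t:.2f}")
# With these made-up samples |t| stays well below ~2, i.e. no evidence of a
# significant difference despite the blindfolded group *seeming* slower.
```

The point is that the apparent gap between the groups has to survive a formal test before it counts as a finding; an eyeballed difference in means, like an eyeballed oval coin, is not evidence on its own.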

This experience is not unique, and it should be a warning to anyone conducting or observing usability tests: do not instinctively act on what you think you have just observed. The Dutch saying ‘Meten is weten’ roughly translates to ‘to measure is to know’, which, although it doesn’t sound as good as the Dutch version, is true indeed. Any good usability test should aim to objectively measure effectiveness and efficiency to support or reject the issues observed. Just because you think something is happening doesn’t mean it is, and even if it is, it would be good to know to what extent, so that an informed decision can be made about whether it is worth fixing.

Dr. Leo Poll is President of Akendi UK, a firm dedicated to creating intentional experiences through end-to-end experience design. To learn more about Akendi or user experience design, visit
