Posted on: 1 September 2016
Putting User Testing in Context
The traditional role of user testing in the product lifecycle has changed dramatically over the last 2-3 years.
User testing is one of the main tools UX professionals use to ensure that product design and development meet users’ needs. A summative user test towards the end of the lifecycle is traditionally used as the final check: a measurement of how well the product meets its design goals in terms of effectiveness (can users complete the task?), efficiency (how quickly and easily can they do so?), and, crucially, satisfaction (how well is it received by end users?).
Measuring these aspects of use tells the design team and management how likely the product is to be successful.
As an example of the traditional approach, consider two teams who have each been given the task of designing a new digital service or app. Both teams generally follow good UX practice and, prior to launch, perform a large user test, producing informative usage data and measures against success criteria.
Team ‘A’ test their app and find that 25% of users really like the app and would make it their preferred choice, but 75% of users hate the design, rate it as the second worst app they have seen, and would never want to use it again.
Team ‘B’ test their design and the results indicate that 90% of users rate the product as very good, describing it as one of the best apps they have experienced.
Actually, forget that; for the sake of illustration, let’s get extreme… Team ‘B’ find that 98% of users rate the app as excellent. In fact, they say it is the second best app, of any type, they have ever seen.
Traditionally, these results would show that the Team ‘B’ designers are UX gods, and management would be very pleased that they had the foresight to assemble such a good design team. The product would be hailed as an overriding success, with much ‘back slapping’ and opening of champagne. Conversely, Team ‘A’ would be skulking in the corner checking out the job market. This has changed.
Because of the evolving nature of digital products, the speed of development has drastically accelerated, and we must now introduce some context into the process: specifically, the context of business strategy and the competitive environment.
Keeping the same example, consider the outcome when I add the context that the app is a taxi-ordering service, and that, on investigation, the ‘best app’ cited by Team ‘B’’s users is also a taxi-ordering app. The test results are unchanged, but the reality is that Team ‘B’ has an excellently rated app, at the top of its game, that will likely generate no sales at all: theirs is only the second best app, and the best app, which users will continue to use, is a direct competitor. Conversely, Team ‘A’ are likely to capture 25% of the market at launch. That is huge and, depending on the business strategy, likely to be a very successful outcome. The ‘back slapping’ and champagne now swap to the other team.
I realize that these examples are extreme, and that user testing itself within product design has not changed; what has changed is the reporting context. It is now essential that UX, and user testing, is involved from the very start of any product design.
Managers looking to involve UX in the process must consider the early insertion of user-centred design, and must not think that UX practitioners are overstepping their remit when they ask about strategy and competition; rather, managers should be concerned if they do not ask.
UX practitioners must ensure that they are aware of the business strategy and the competitive analysis, and that these aspects of the design are properly investigated as part of any user testing.