This article first appeared in the April 2012 issue (#226) of .net magazine – the world's best-selling magazine for web designers and developers.
.net: What’s quantitative usability testing?
RM: As the usability industry evolves, we’re finding that many managers insist upon quantitative answers to usability questions as well as the traditional qualitative information. They want to see proof that usability is improving year on year, so they can demonstrate to their managers that the money they put into UX is worthwhile.
It’s possible to make good quantitative usability measurements, but it costs a lot more than traditional testing and you have to be very careful about how it’s carried out. Qualitative measuring is a forgiving method: you can make mistakes with it and still get good results. Quantitative measuring is much more brittle – it only gives good results if you are disciplined and follow the method rigorously.
.net: What mistakes do usability testers make?
RM: A big one is incorrect handling of numbers. For example, every measurement has a degree of uncertainty associated with it and that should be included in the results. That’s non-trivial – the uncertainty is part of the truth, but many practitioners don’t include it.
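The uncertainty Molich describes can be made concrete. For a task success rate measured on a handful of participants, a binomial confidence interval shows how wide the margin really is. The sketch below uses the Wilson score interval, one common choice for small-sample proportions; the numbers are purely illustrative and are not taken from the interview or the CUE studies:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion,
    e.g. the task completion rate observed in a usability test."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# 7 of 10 participants completed the task: a 70% observed success rate,
# but with n = 10 the true rate could plausibly lie anywhere
# from roughly 40% to 89%.
lo, hi = wilson_interval(7, 10)
print(f"success rate 70%, 95% CI: {lo:.0%} to {hi:.0%}")
```

Reporting the interval alongside the point estimate is exactly the kind of "uncertainty as part of the truth" being argued for: two studies whose intervals overlap may well agree, while non-overlapping intervals signal that at least one study went wrong.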
I have conducted a number of so-called Comparative Usability Evaluation studies where we took a large number of teams and had them carry out the same quantitative study on a website. Many of the teams used the correct methodology and arrived at similar results, but some of the teams arrived at results that were so far from each other that their uncertainty intervals didn’t overlap. So some of the studies were simply wrong. We investigated what went wrong in these studies, and we found the problems were mostly around poor recruitment of people to test the website and incorrect handling of measurements.
A key finding in all of this is that those teams whose studies were fundamentally flawed were unaware of it – and these were people who were being paid to teach or practise usability. That worries me a bit, because these people didn’t understand their own limitations and I didn’t see any sense of caution about promoting their results. This is a problem in the community generally; I rarely see any discussion of usability testing mistakes. It’s a mark of maturity in a profession when mistakes are seen as an asset that can be used to improve future performance.
.net: What factors contribute to this culture?
RM: Our profession is still young, and many people see what they’re doing as an art as opposed to an industrial process. We’ve been doing usability testing reasonably systematically now for about 25 years – it’s not an art any more; it should be an industrial process that we can measure and standardise, and in which we can certify people.
But a lot of usability professionals don’t like that view, because they really value the freedom they feel they have in applying design rules and making interesting little twists to usability testing. Sometimes these adaptations are for the better, but in most cases they are not. I have written a checklist that sets out the essential qualities of a good usability test, and I think something like that should be part of the contract when a company commissions usability testing.
.net: So you think there should be accreditation for usability testing?
RM: Yes, very strongly, because there are too many poor practitioners out there. There are efforts underway in Europe led by the German Usability Professionals Association to develop accreditation at the basic level, and at every opportunity I push for it to be done at the advanced level also.
.net: What are the biggest UX mistakes you still see on websites?
RM: The number one mistake is badly phrased error messages – either nothing happens when an error is made, or the message is incomprehensible because it’s written in technical language. After that, failing to make options visible to the user.
.net: What is the very first thing that should be considered when designing a site or solving a problem?
RM: The most important thing is to get the tasks right: work out what it is that users want to do on a site and make it highly visible.