It has long been recognised in usability circles that simply listening to users is inherently risky, and that it is far more valuable to watch what they do. Yes, we can gain plenty of insight from having participants think aloud in a usability test, for example, but once we get into user preferences, or asking what someone did last week… that’s not so reliable.
I’m far from being an expert in human factors and psychology, but we have probably all experienced in some way the disconnect between what people do and what they say. Ever cooked a meal for someone and messed something up? You watch them battle with your dried-out, overcooked chicken, pushing the blackened bits to the side of the plate. But at the end, as you’re clearing up, they say, “Hey, thanks for the meal! Really… um… tasty”. There’s a big social element at play here, and people will often naturally say what they think you want to hear.
The same goes for usability tests. Someone struggles with your interface; they frown and grimace; they can’t find the species selection menu… but when you ask them at the end how the session went and what they think, they say, “Oh, I really like what you’ve done here. I’ll definitely use it!”.
“The trouble with market research is that people don’t think how they feel, they don’t say what they think and they don’t do what they say.”
The article is actually about hi-tech approaches to market research and the assessment of emotional response, but it also overlaps with people’s perception and experience of using a “product”… and that product could just as easily be your website or application as a new toaster.
What were you thinking?!
This isn’t particularly new. Hi-tech ways of assessing the usability or effectiveness of an interface have been in use for a while now. Eye-tracking is the big one here, but in the UX world it’s a controversial subject!
In addition, this is all about considering how people respond, and what their estimation of something is. For that, we might use low-cost techniques like focus groups (again, if we like taking a risk!) or surveys to gather opinions, but even there we have to be careful. Psychology plays a big role again: when and how questions are asked can vastly influence participants’ responses.
Phew. It’s enough to make your head hurt! So how do we cope with it? Well, I’d say that learning from the experts, and learning about the common pitfalls, would stand us in good stead. But, just as Jenny Cham said recently about usability testing, we need to try things out and see what works.
With a survey, for example, it is common to try it out on a small selection of participants – a pilot study (like beta testing!) – before releasing it fully, just to check how the questions are commonly interpreted. Not quite trial and error; that’s not good enough. But gauging users’ emotions, feelings and perceptions certainly seems like an inexact science (at least if you have limited time and resources), and we only get better at it by doing.
Listening to people while they use something can really help you gain insight into what they are thinking, especially because you are observing what they are doing at the same time. Things get a bit shakier when you start to ask about fuzzier things… “do you like it?”, “how would you do it?”, “would you click on it if we put it over here?”. Something to bear in mind when carrying out different kinds of user research.