OK, so you have done some usability testing, and your participants were happy with some features, but struggled with others. So now what?!
You need to identify where the problems lie, review the list (preferably as a team), and then prioritise it into something the developers can start to tackle. You might not be able to fix everything before the next round of testing, but that’s OK: you will still make a difference to the usability of your site.
Previously in this series:

- Part 1: Usability Testing is Easy
- Part 2: Designing tasks
Identification and interpretation
Identifying and interpreting the problems that show up during usability testing can sometimes be challenging. For web-based applications and tools of the type produced by the EBI, how easy it is depends largely on the main reviewer’s familiarity with the system being tested, and with the project as a whole.
If the person who ran your usability tests is also one of your developers, then life might be a bit easier. As Dado Marcora remarked during his part of the Usability Testing is Easy presentation, when a developer witnesses usability problems first-hand, they will inevitably start sorting them, thinking of possible solutions, and so on.
If possible, it is a great idea to have the rest of the development team watch (or even sit in on) some of the usability tests. This is where recording sessions with something like Silverback can be really useful, so that you can show them after the event.
Give each developer a recording sheet, and ask them to note the three main problems that leap out at them. Since they already have a strong mental model of how the system works, they will be the first to notice when a user’s behaviour diverges from it.
Collect these, and build them into a list.
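To make the collation step concrete, here is a minimal sketch in Python; the problem descriptions and number of recording sheets are made-up examples. It merges each developer’s top-three list into a single ranked list, ordered by how many developers flagged each problem:

```python
from collections import Counter

# Each developer's recording sheet: their top three observed problems.
# These problem descriptions are hypothetical examples.
sheets = [
    ["search box hard to find", "cryptic error message", "slow results page"],
    ["cryptic error message", "search box hard to find", "unclear download link"],
    ["cryptic error message", "slow results page", "unclear download link"],
]

# Count how many developers flagged each problem...
counts = Counter(problem for sheet in sheets for problem in sheet)

# ...and rank problems by number of mentions, most-flagged first.
ranked = counts.most_common()
for problem, mentions in ranked:
    print(f"{mentions}x  {problem}")
```

A problem three developers noticed independently is a strong candidate for the top of your list, which is a useful starting point before the team discusses severity.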
All problems are not born equal
I recently got a copy of Steve Krug’s excellent new book, Rocket Surgery Made Easy (*). There’s a great quote that applies to this “debriefing” process, as he calls it, and picking out and prioritising problems:
Determining severity is always a judgment call. Problems that are going to cause a lot of people a lot of trouble are no-brainers. The toughest decisions involve corner cases (very damaging problems that affect only a few users) and ubiquitous nuisances (things that affect a lot of people but are really only minor annoyances).
Steve Krug, “Rocket Surgery Made Easy” (2009), New Riders – ISBN 0321657292
Quite so. Given the goals of your users and your stakeholders (e.g. which data do you want to make available?), you should be able to identify the critical problems that need to be dealt with at high priority. Then you can sort through the rest.
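Severity is, as Krug says, a judgment call, so no formula can decide it for you. Still, his distinction can be sketched as rough quadrants. The function below is a hypothetical illustration only: the 1–5 scales and thresholds are arbitrary assumptions, not anything from the book:

```python
def classify(impact: int, frequency: int) -> str:
    """Crude quadrant labels for Krug's distinction.

    impact and frequency are judged on a 1-5 scale;
    the >= 4 thresholds are arbitrary.
    """
    high_impact = impact >= 4
    high_frequency = frequency >= 4
    if high_impact and high_frequency:
        return "no-brainer: fix first"
    if high_impact:
        return "corner case: very damaging, few users"
    if high_frequency:
        return "ubiquitous nuisance: minor but widespread"
    return "low priority"

print(classify(5, 5))  # no-brainer: fix first
print(classify(5, 1))  # corner case: very damaging, few users
print(classify(2, 5))  # ubiquitous nuisance: minor but widespread
```

The point of the quadrants is only to frame the team discussion; the tough calls in the off-diagonal corners still need a human decision.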
This is probably best done with the whole team involved, so that you get a range of perspectives. Quite simply, people vote on where each problem should sit on a priority list. Get some big pieces of paper, post-it notes and little sticky dots.
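The dot-voting tally itself is simple enough to sketch. In this hypothetical example (team member names, dot allocations and problem descriptions are all invented), each person distributes three sticky dots, and the problems are ranked by total dots:

```python
# Hypothetical dot-voting: each team member gets three sticky dots
# to distribute across the problems they think matter most.
votes = {
    "Alice": {"cryptic error message": 2, "slow results page": 1},
    "Bob":   {"cryptic error message": 1, "unclear download link": 2},
    "Carol": {"slow results page": 2, "cryptic error message": 1},
}

# Tally the dots per problem across the whole team.
totals = {}
for dots in votes.values():
    for problem, n in dots.items():
        totals[problem] = totals.get(problem, 0) + n

# The priority list: most dots first.
priority = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for problem, dots in priority:
    print(f"{dots} dots  {problem}")
```

On the wall you would do this with paper and stickers rather than code, of course; the value is in the conversation while the dots go up, not the arithmetic.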
If you are doing regular usability testing (Steve Krug recommends a morning a month), assess how many problems you can realistically fix before the next round of testing; this should help you divide up your available resources. Krug also suggests doing the least you can do to fix each problem… so shave your application with Occam’s razor. :)