A matrix of tactics for getting usability issues fixed
On July 20th, I had the pleasure of giving a workshop at UX Bristol 2012 alongside Caroline Jarrett. We promised that we would share all the great ideas and recommendations that our participants generated. These are tactics for making sure that the usability issues you find actually get fixed.
A big thank you to Steve Krug for allowing us to build on all the work he did with Caroline on this topic, and for letting us reuse his slides in our presentation.
The following is a matrix of those “lightbulb tactics”, with the four main usability testing phases we considered on one axis, and some apparent themes along the other. I produced those themes by doing a quick bit of affinity mapping of all the tactics… hopefully, they make sense!
30 lightbulb tactics for getting things done
Not all of these will be relevant for everyone but it would be great to know what works for you, and if you have any tactics you’d like to share. Here they are, grouped by usability testing stage:
Stage 1: Preparation
1. Be clear about who signs off on what and have them involved from the start
2. Involve the actual decision-maker in the process early on and make sure they’re “on board”
3. Involve the technical team early (and make it their product)
4. Allocate time to get stuff done and respect it
5. Build in slack – false deadlines to give flexibility for last minute changes
6. Schedule in some “maintenance time” for developers to fix stuff
7. Stay focused and prioritise what to fix
Stage 2: Testing
8. Encourage developers to observe testing sessions
9. Develop & fix while observing people finding usability issues (this reminded me of the RITE method)
Stage 3: Reporting
10. Use their language and not UX-specific terms
11. Take the opportunity to change the decision-maker’s opinion
12. Report issues in a bug tracking system and make sure they are assigned to someone
13. Show, don’t tell
14. Don’t hang your expectations of getting things done on one big, final report
15. Don’t present usability issues to be fixed as “nice-to-haves”
16. Don’t focus on deliverables (focus on action)
17. Describe the usability issues as clearly as possible
18. Reinforce findings from usability testing with quantitative data (e.g. from A/B testing)
Stage 4: Fixing
19. Make the decision-maker the champion of UX decisions
20. Challenge the technical team who say “it can’t be done”. Why not?
21. What can be done? Is it really all or nothing?
22. Is this the right developer to fix these issues? Do you need a more senior or junior one?
23. Try to re-shuffle the technical team, to get fresh eyes on an issue
24. Use A/B testing to monitor the results of small changes, and use successes to support work on bigger issues
25. Share the load – make yourself the decision-maker for some of the small issues
26. Make sure that “small” issues are assigned to someone
27. If you’re following Agile and continuous release cycles, try to build usability issue fixes into sprints
28. Help to break down big issues into smaller, more manageable issues – be involved
29. Help people to prioritise what needs to be fixed
30. Figure out impact VS effort for each issue (e.g. using an impact/effort matrix)
Practical workshops & informative talks at UX Bristol
UX Bristol is a great one-day conference, and although it is rounded off with a series of short (5-minute) talks, the focus is really on practical, hands-on workshops to allow UX professionals to share knowledge and experience. This was certainly what we aimed to do.
The workshop that Caroline and I gave was called “Why do usability problems go unfixed?”. We had an hour to encourage our enthusiastic participants to generate and share tactics for getting usability issues fixed in the face of resource issues, politics, resistance, inertia, etc. This workshop was a follow-up to one that Caroline ran with Steve Krug at the UPA conference in Las Vegas earlier this year, and we promised to share what our group generated.
Yes, there was a presentation
We broke the workshop up into four main phases:
- gathering some examples;
- delving into why;
- working on tactical solutions;
- sharing & reporting.
Before that, though, we set the scene with a short talk about why usability issues go unfixed and what Caroline and Steve found when they ran a survey on the topic. Here are all the slides we used for this workshop:
Everyone worked on tactical solutions
Because we didn’t have much time, we wanted to get into the hands-on part of the session as soon as we could. We suggested that people think about usability testing in four phases (prepare, test, report, act) and then for any of those phases, generate or share tactics for dealing with any of the obstacles we might face – either ones that Caroline and I had presented, or ones from their experience.
Using the 3-12-3 game structure, participants had three minutes to individually note down as many tactics as possible; 12 minutes to discuss, sort, combine and organise these as a group; then three minutes to present their tactics back to the room.
Some more resources to help you
There is obviously an awful lot out there covering usability testing and how to weave UX design work into your projects. If you want to explore the area some more, or follow up on some of the references that Caroline and I made, these resources should help you:
If you’re considering the “UX maturity” of where you’re working, you should read Renato Feijó’s article on planning your UX strategy. On a similar topic, see the video of Michele Ide-Smith’s presentation from UX Cambridge 2011 on embedding UX in large organisations, as well as her slides for that talk.
On usability testing and evaluation methods and techniques, look at books like Carol Barnum’s “Usability Testing Essentials”, Caroline Jarrett et al’s book “User Interface Design & Evaluation”, and Steve Krug’s “Rocket Surgery Made Easy”.