I was really pleased to be back in Bristol on July 18 for UX Bristol 2014. I’ve been a couple of times before, and I might have enjoyed this one the most. I learned a lot, and there are various topics and discussions that I want to follow up on.
Participants have a selection of four one-hour workshops to attend, and the day wraps up with a few short talks. I have sketchnotes for the things that I attended, and I’ve included those below. There are also great live-blogged notes online… quite by coincidence, I seem to have attended the same sessions as the live blogger!
I was lucky enough to be involved in organising the UX lightning talks event for Cambridge Usability Group again this year. Towards the end of March, I sent out a call for people in the Cambridge UX community who would like to give a 5-minute talk about their work. Tips and tricks, stories from the trenches, thoughts, ideas, questions… anything. They didn’t disappoint, and on the night, to a sell-out* audience, everyone shared ideas and told stories based on their experience. Just for a bit of added excitement / headache, I gave a talk as well. We hosted it at Microsoft Research, and Red Gate Software kindly sponsored drinks afterwards at a nearby pub.
I’ve managed to pull together notes for the talks, in order of appearance…
While this might not be as strict (or serious?!) an evaluation technique as, say, heuristic analysis or usability testing, having two people act out the roles of the user and the user interface (UI) of a system can, I think, be very revealing.
It’s a little bit like some of the more elaborate “paper prototyping” scenarios I’ve seen, but I first heard of this in a talk by Stephen P. Anderson at UXLx 2010. Perhaps the best thing to do is to watch Stephen describe it at another event in Norway.
Observing representative users as they carry out representative tasks using a system (for example, your data visualisation tool) and recording any issues that come up!
Ideally, there’s a participant (a user to test your system), a facilitator (that might be you), and an observer (let them take notes).
Dave Hamill’s usability testing tips & tricks (sketchnotes from NUX2)
As part of evaluation, critique gives you a structured way to discuss usability issues with the developer of a system. Aim to give and receive feedback in a positive way. Be kind: you don’t need to put the recipient on the defensive (although avoiding that takes skill and practice).
Heuristic analysis provides a way of objectively evaluating your own work or that of others against a set of design principles (“heuristics”). While it is a great way to pick up issues early on, it is not a replacement for observing real people using your system. We’ll get to usability testing later on!
A nice little primer for Jakob Nielsen’s 10 Usability Heuristics. This was created by João Machini and is used with permission.
We knew that lots of people, both on the Genome Campus and off it, were interested, but we were less sure about the extent to which that would translate into actually selling places! Turns out that we needn’t have worried.
Places were sold out in a record 29 minutes. Now, there’s a waiting list and it looks as though we could run the same event a week later and fill it. Phew.
Screengrab from jsbestpractices.com