I was really pleased to be back in Bristol on July 18 for UX Bristol 2014. I’ve been a couple of times before, and I might have enjoyed this one the most. I learned a lot, and there are various topics and discussions that I want to follow up on.
Participants choose four one-hour workshops to attend, and the day wraps up with a few short talks. I have sketchnotes for the sessions I attended, and I’ve included those below. There are great live-blogged notes online, too… quite by coincidence, I seem to have attended the same sessions as the live blogger!
I was lucky enough to be involved in organising the UX lightning talks event for Cambridge Usability Group again this year. Towards the end of March, I sent out a call for people in the Cambridge UX community who would like to give a 5-minute talk about their work. Tips and tricks, stories from the trenches, thoughts, ideas, questions… anything. They didn’t disappoint, and on the night, to a sell-out audience, everyone shared ideas and told stories based on their experience. Just for a bit of added excitement / headache, I gave a talk as well. We hosted it at Microsoft Research, and Red Gate Software kindly sponsored drinks afterwards at a nearby pub.
I’ve managed to pull together notes for the talks, in order of appearance…
While this might not be as strict (or serious?!) an evaluation technique as, say, heuristic analysis or usability testing, having two people act out the roles of the user and the user interface (UI) of a system can, I think, be very revealing.
It’s a little bit like some of the more elaborate “paper prototyping” scenarios I’ve seen, but I first heard of this in a talk by Stephen P Anderson, at UXLx 2010. Perhaps the best thing to do is see Stephen describe it at another event in Norway.
Observing representative users carry out representative tasks using a system (for example, your data visualisation tool), and recording any issues that come up!
Ideally, there’s a participant (a user to test your system), facilitator (that might be you), and observer (let them take notes).
Dave Hamill’s usability testing tips & tricks (sketchnotes from NUX2)
As part of evaluation, critique gives you a structured way to discuss usability issues with the developer of a system. Aim to give and receive feedback in a positive way. Be kind: you don’t need to put the recipient on the defensive (although avoiding that takes skill and practice).
Heuristic analysis provides a way of objectively evaluating your own work or that of others against a set of design principles (“heuristics”). While it is a great way to pick up issues early on, it is not a replacement for observing real people using your system. We’ll get to usability testing later on!
A nice little primer for Jakob Nielsen’s 10 Usability Heuristics. This was created by João Machini and is used with permission.
We knew that lots of people, both on the Genome Campus and off it, were interested, but we were less sure about the extent to which that would translate into actually selling places! Turns out we needn’t have worried.
Places were sold out in a record 29 minutes. Now, there’s a waiting list and it looks as though we could run the same event a week later and fill it. Phew.
Screengrab from jsbestpractices.com
Since the summer of 2013, I’ve been working with my colleague, Rafael Jimenez, to take this from an idea to reality. Though we hope to have lots of interest from staff at both the EBI and the Sanger Institute, the event is open to anyone.
Inside the European Bioinformatics Institute
While I am keen to make it understood that UX design is not UI design, now and then I am asked to help design a user interface. During the summer, I was pulled onto a project at the last minute: “Can you help us with this presenter interface?”. It turned out to be an interesting little exercise, and the end result seems to be pretty good.
Start screen from the control panel
To maintain your sanity, and as a tool for communicating the goals of user research, a plan is essential.
I’m not a fan of lots of documentation in a project, and in any case, exhaustive development specifications and milestone reports are usually not part of the projects I work on at EMBL-EBI. Even so, for a given piece of user research, I want to have a plan, and the single-page kind that Tomer Sharon recommends is perhaps my favourite way of doing this.
A friend recently asked for an example of one of these single-page user research plans, so here’s one I drafted recently. I’ve anonymised it slightly, so it might seem a little bit vague!
An anonymised example of a single-page user research plan
Since the idea of considering the user experience of EMBL-EBI bioinformatics resources took root about four years ago, we’ve been able to build on past successes and people’s trust, and expand the kind of UX design-related work we can get done.
More recently, we’ve begun to get more traction for researching users and their habits early in a project, to give us a solid foundation for design and development work. This is brilliant, since it brings us closer to the community and lets us learn from their stories, but just like scientific research, user research needs to be planned. To have a record of this, I’ve adapted Tomer’s format very slightly, and now I have something that I can use myself, and share with team-mates, project leaders, and other stakeholders.