The evening was organised by Mark Dalgarno of Software Acumen, and was one of the events he puts on as part of the Software East group. It was held at the offices of Red Gate Software, in Cambridge, and there were around 100 attending.
The talks included:
- 18:30 – 19:00 Michele Ide-Smith (Cambridgeshire County Council): Embedding usability in your organisation
- 19:00 – 19:30 Stephen Chambers (Red Gate): Things we learned when redesigning the Red Gate website
- 19:30 – 20:00 Break
- 20:00 – 20:30 Jenny Cham (EMBL-EBI): Why did you click there? How to run 1-to-1 usability testing
- 20:30 – 21:00 Rob Kerr & Neil Turner (Cambridge Assessment): Remote User Testing 101
Talk 1: “Embedding usability in your organisation” – Michele Ide-Smith, Cambridgeshire County Council
Michele started considering the user experience of the Council’s website back in 2006. They offered a very wide range of services and information via their website, and it could often be very confusing. The Council had very little idea about their users and their behaviour, and as such, they had to begin thinking about the perceived usefulness of what they offered.
So the “UX maturity” of the organisation was quite low at that time, but that has improved greatly over the last few years, and the Council now has an over-worked but dedicated UX architect on their staff.
Michele’s own development as a UX professional started when a revelation (watching a member of the public try and fail to use part of the Council website) became a passion. Still, part of the art of UX design is knowing when to apply techniques, and this comes from experience. Learn by doing!
- Start small, gain momentum
- Perhaps refer to “customers” instead of “users”
- Use tact and diplomacy to gain the confidence of others
- Battle skepticism and obstacles with evidence and data, rather than opinion
Face-to-face usability testing is hugely valuable for learning about behaviour, but if you need numbers and statistics, and especially if you want metrics, then there are all sorts of applications (frequently web-based) that can help you. Michele particularly mentioned the screen coordinate-based click tracker Crazy Egg (where do people click?) and the unmoderated remote testing suite Loop11 (what do people do? Can they find things?), although the latter is quite an expensive service. Old favourites like Google Analytics and Webmaster Tools can tell you where people arrive from and where they leave your site.
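To make the click-tracking idea concrete, here is a minimal sketch (not Crazy Egg's actual implementation, and using made-up click data) of how raw click coordinates can be bucketed into grid cells so that hot spots on a page stand out:

```python
from collections import Counter

def click_heatmap(clicks, cell=50):
    """Bucket raw (x, y) click coordinates into grid cells.

    Aggregating clicks per region is the basic idea behind
    coordinate-based click trackers: frequently-clicked areas
    of a page become obvious. `cell` is the grid size in pixels.
    """
    return Counter((x // cell, y // cell) for x, y in clicks)

# Hypothetical click log: most clicks cluster near (120, 80).
clicks = [(118, 77), (125, 82), (122, 79), (400, 300), (119, 81)]
hottest_cell, n = click_heatmap(clicks).most_common(1)[0]
print(hottest_cell, n)  # → (2, 1) 4
```

A real tool would overlay these counts on a screenshot of the page; the aggregation step itself is this simple.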
- Set targets
- Compare before and after (and thus show the ROI of applying UX design methods)
- Encourage others to learn UX design skills… and use them!
- Communicate to your organisation
- Show what UX design techniques work in what circumstances
- Use personas… these can be very valuable resources
Clearly, the work that Michele and her team do starts to overlap strongly with service design, because they are considering the channels by which the public (their “customers”) consume services and information, and what the main touchpoints are. That could also make for a really interesting talk, but we couldn’t go there!
Talk 2: “Things we learned when redesigning the Red Gate website” - Stephen Chambers (Head of UX at Red Gate Software)
The website had become unwieldy, fragmented and unpredictable. Different divisions within the company were presenting products in very different ways, and this inconsistency was becoming a real problem: “inconsistency can kill your brand”. Divisions were also very protective of “their” products and areas of the website, so there was a lot of politics to overcome.
Eventually, Stephen and the UX team convinced the company bosses to allow them the time to deal with this problem (which was only going to get worse). They were given FIVE WEEKS to come up with a redesign.
The first thing to do was to tackle the politics. They were able to reassure stakeholders that they had a plan and that they would make the process transparent and as inclusive as possible (which definitely does NOT mean design-by-committee or top-down dictation).
They made use of existing data and evidence to influence design decisions, which gives a balanced approach to stakeholder requirements and user needs – a core feature of user-centred design (anyone sensible knows that it isn’t solely “all about the user”).
They block-booked a big meeting room, and that became their “war room”. The five members of the UX team literally moved in there for those five weeks, and no-one else could book the room. They had their computers set up, plus a large table for sharing work, sketching and discussion. The walls naturally became the places to store design artefacts: one wall for “inspiration”; another covered structure and architecture; another was for mockups.
This provided a transparent and open view of the project. People could come in and be shown what was going on. Everything was in one place, and the team could work well together. Digital artefacts were kept in a shared directory on the network, so that they could be accessed and shown to whoever needed them.
- During the first week, they didn’t go near any software (always a good idea!) – it was all brainstorming ideas and sketching.
- In the second week, they began creating digital lo-fi mockups in Balsamiq. These prototypes could then go through usability testing, etc.
- In the third week, the team split up so that two guys began working in parallel on the aesthetics, while the other three carried on making mockups and structuring the site and its pages.
Take home points:
- Gain unequivocal support from management
- The war room was hugely beneficial
- Don’t design only to solve current problems – think ahead
- Providing page templates to the rest of the company defused many problems (this is also the approach taken at the University of Cambridge)
- Testing is difficult (due to time constraints?)
- Implementation of the new design was a BIG job, and required proper project management
Talk 3: “Why did you click there? How to run 1-to-1 usability testing” – Jenny Cham (EMBL-EBI)
Jenny talked about her experience of running face-to-face usability testing with scientists, and what she found worked best. Her aim was to share some tips for how to run this kind of qualitative testing, and to show that it is easy and worthwhile. A key point is that she tries to defuse any anxiety on the part of participants by avoiding the word “test” and talking about “user interviews” instead.
After a brief introduction to the EBI, and the kind of specialist services and applications that she has been responsible for testing, she went on to discuss some of the logistics and best practices of running tests. This included things like:
- her model for recruiting, screening and incentivising participants
- arranging a room and setting a schedule
- writing suitable scenarios and tasks, and having a dry-run test with colleagues
- setting the tone for the “user interview” sessions
She stressed that, although tests could have been run remotely in some cases, there was a great benefit to travelling to see participants, in that it is also a very positive outreach activity. While she is learning about users, they are learning about the EBI through her, and a relationship is built up.
Jenny also showed a short clip from a “user interview” session, which demonstrated the benefit of being alongside the participant for this kind of qualitative testing, where you want to observe behaviour and reactions, and ask questions about the participant’s experience.
She also recommended:
- having no more than about four participants in a day
- allowing an hour per participant
- leaving 20–30 minutes between participants, to quickly gather and analyse notes
- being prepared to be tired!
Talk 4: “Remote User Testing 101” – Rob Kerr and Neil Turner (Cambridge Assessment)
These guys did very well in the face of good, old-fashioned technical problems! They talked about and demoed some ways in which we can carry out usability testing remotely.
Remote testing makes sense when face-to-face testing just isn’t practical. It may be too expensive; you may need to test with a large number of people (perhaps this is a quantitative study); participants may be geographically too widely distributed, and so on. It is also a good way to follow up on interesting issues and problems that have been uncovered in face-to-face testing, where it might be good to gather more data. It allows for deeper exploration of patterns across a larger number of participants, and thus lends itself to statistical analysis.
Rob briefly described moderated remote testing (sometimes called “synchronous” testing), where you, the test facilitator, are present at the same time as the participant, usually using software that allows you to talk to the participant, and sometimes to see their screen.
Neil talked to us about unmoderated testing (AKA asynchronous testing), where a test participant runs through a series of tasks without any guidance or moderation from the tester. To illustrate this, Neil gave a simple demo of the Loop11 online service, which allows you to write scenarios and tasks and to set success markers (usually a page that the participant reaches, or a link that they click on); it also allows you to analyse typical paths that users take when performing a task. [As a side point, this would be a way to discover “red routes” or “desire paths” in your website.]
This service, and others like it, will do lots of the hard work for you, producing attractive statistical analyses and reports.
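The path-analysis part of such tools boils down to tallying the page sequences participants followed. A rough sketch of the idea (not Loop11's actual implementation; the task and session logs are invented for illustration):

```python
from collections import Counter

def common_paths(sessions, top=3):
    """Rank the page sequences participants followed for a task.

    Each session is the ordered list of pages a participant visited;
    the most frequent sequences are the site's de facto "desire paths".
    """
    return Counter(tuple(s) for s in sessions).most_common(top)

# Hypothetical session logs for the task "find the pricing page".
sessions = [
    ["home", "products", "pricing"],
    ["home", "pricing"],
    ["home", "products", "pricing"],
    ["home", "search", "pricing"],
    ["home", "products", "pricing"],
]
for path, n in common_paths(sessions):
    print(" > ".join(path), n)
# The top path, home > products > pricing, is taken by 3 of 5 participants.
```

Comparing the most common path against the path the designers intended quickly shows whether the navigation is working as planned.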