Tips for managing usability testing and recording results

Preparation and continuity

The first post on discount usability testing looked at organising a day of user research. The output of such a day is typically five or so screen recordings with verbal commentaries, and maybe some notes. I don’t take many notes myself, being too busy following the script and attending to the subject’s verbal and non-verbal behaviour. Giving observers post-its to note down their observations is another means of recording results and engaging the team.

Preparing a subject involves describing the whys and hows of thinking aloud, and affirming that hearing about what they find difficult is as helpful as hearing about what works well. But commentating whilst trying to work something out isn’t a natural behaviour. Encouragement and prompting are often needed, especially when a subject gets stuck or has to think. At that point, reflecting back and asking what they’re trying to do helps to clarify the issue and have them resume their commentary.

e.g.
Observer:  “Is that an appealing deal?”
Subject:   reading silently
Observer:  “You’re reading that carefully, what are you thinking? Are you looking for something?”
Subject:   “I can’t see if ‘hotel offers’ include passes for the rides.”

N.B.
Reflecting back to the user adds information to the recording that is useful for writing up.

 

Results and analysis

Reviewing and recording results takes headphones and about as many hours as were spent testing.
I usually log issues on a spreadsheet as I’m listening. The one below collates six users and chunks results by task and subtask. It doesn’t include any time-stamped links to exact places on the videos, as I don’t find such links are clicked often enough to merit the effort of adding them in the first place. But the full recordings/transcripts should always be available (subject to the terms of the consent form participants signed).
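One way to keep such a log machine-readable is a plain CSV with one row per issue. A minimal sketch — the column names and the `WEB-123` ticket reference are my assumptions, modelled on the table below (task/subtask chunking, a RAG severity, a link to the remedial ticket); the example issue is the one from the dialogue above:

```python
import csv
import io

# Hypothetical columns modelled on the issue table: task/subtask chunking,
# how many users hit the issue, a RAG severity and the remedial ticket.
COLUMNS = ["task", "subtask", "issue", "users_affected", "rag", "ticket"]

def write_issue_log(rows, out):
    """Write issue dicts to a CSV stream with a header row."""
    writer = csv.DictWriter(out, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_issue_log(
    [{"task": "Book a break", "subtask": "Choose offer",
      "issue": "Unclear if hotel offers include ride passes",
      "users_affected": 4, "rag": "amber", "ticket": "WEB-123"}],
    buf,
)
```

The resulting file opens directly in a spreadsheet, so the same data feeds both the review session and any written report.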

 

Recording results of usability testing in a table
A table that records and ranks issues, observations and ideas from usability testing thorpebreaks.co.uk

 

“Ragging” issues (red, amber, green) assesses their severity. It can be based on a number of factors, e.g. length of delay (impact), the number of times it was reported (frequency) *, whether intervention was required to move the user on through the task, etc. Reviewing with just one assessor inevitably makes the evaluation subjective, so having someone else involved helps to moderate, as well as publicise, the findings.

*  In project management, a risk is traditionally assessed by multiplying its severity by likelihood
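As a minimal sketch of that idea — the thresholds and factor names here are illustrative, not from the post — an issue’s RAG rating could combine impact and frequency like this:

```python
def rag_rating(delay_seconds, times_reported, needed_intervention):
    """Rate an issue red/amber/green from impact and frequency.

    Thresholds are illustrative; tune them to your own sessions.
    """
    # Needing to step in and move the user on is a blocker: red.
    if needed_intervention:
        return "red"
    # Rough impact x frequency score, echoing risk = severity x likelihood.
    score = delay_seconds * times_reported
    if score >= 120:
        return "red"
    if score >= 30:
        return "amber"
    return "green"
```

A shared scoring rule like this won’t remove subjectivity, but it does make two assessors argue about the same numbers rather than about adjectives.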

Using a spreadsheet to track tasks and record observations is one way to collate findings. Colour can improve layout and accessibility. Additionally, if a report is needed, then the evidence can be cited by a single reference.

Also noting what was liked and worked well helps to balance the feedback and motivate the team.

 

Analysing and communicating usability issues
Referencing and indexing issues, suggesting improvements and linking through to remedial tickets

Whilst dev teams might be familiar with JIRA, stakeholders are perhaps more comfortable with spreadsheets and presentations. The last column of the sheet above links through to remedial JIRA development tickets. If more is done in JIRA, a sprint’s user-testing ticket can link directly to the remedial coding tickets.

Self-organising teams do what works best for them.

 

 

Preparing for 5

Understanding usability testing 101 describes how “discount testing” with five people and no lab goes a long way toward identifying usability issues. The practical benefits are: –

  • Five one-hour sessions constitute a good day’s work
  • Its simplicity facilitates the good practice of testing early and often

Of course there’s more to usability testing, but understanding a simple methodology’s strengths and limitations, and being able to do it, is a good start.

This post describes 5 practical considerations to prepare before a day of testing.

 

Stakeholder knowledge and opinions

For an existing product, marketing and customer support will have a great deal of relevant information and be in touch with users. They are often motivated to help, and work in the same building.
That said, stakeholders who are already familiar with a product or service usually have pre-conceived ideas about how it should be improved. These need to be carefully unpicked and substantiated (Jared Spool talking about engaging stakeholders). Though their insight and cooperation are invaluable, being familiar with something can introduce subjectivity and draw attention away from the users’ perspective.

 

1. Identifying the target audience

Audiences can be segmented by demography (e.g. age, gender), access (e.g. computer literacy, being online, device), circumstance (e.g. fostered, attending a clinic) or ability (e.g. language skills, literacy).

Webstats are useful for challenging opinions and showing how a site’s actually being used, and by whom. They also describe the audience’s location, device, OS, age and even gender. But of course they don’t say much about prospective users in new markets.

Exemplar data from Google Analytics

 

2. Recruiting and organising

There are market research and usability testing agencies who will recruit, select, schedule and track attendance; they will also conduct the research for you.

When recruiting participants in-house, it helps to think about why someone might want to participate: they might be motivated to improve the product or service for themselves and others, incentivised by vouchers and discounts, or happy to gain kudos by having their contribution acknowledged or recognised in a community. Advertising might mention such things, alongside describing what’s involved and how to enrol.

Screen recording software enables testing to be done remotely, moderated or unmoderated. When setting up a programme of research, it’s helpful to first step back and think about the requirements, approach, constraints and outputs. A research plan needn’t be long, and can be updated in stages.

If research is done in-house, then think about a light CRM programme to capture subjects’ contact information, preferences, availability and participation etc. Depending on the level of engagement, quite a large feeder group might be necessary to get 5 people from the target demographic to regularly participate.
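To put a rough number on “quite a large feeder group”: under assumed rates for matching the target demographic and actually turning up (both figures below are mine, not the post’s), a back-of-envelope estimate is:

```python
import math

def feeder_group_size(sessions_needed, match_rate, turnout_rate):
    """Back-of-envelope pool size: people needed so that, after filtering
    by demographic match and realistic turnout, enough sessions fill up."""
    return math.ceil(sessions_needed / (match_rate * turnout_rate))

# e.g. 5 sessions, half the pool matches the demographic, 1 in 5 turns up:
print(feeder_group_size(5, 0.5, 0.2))  # 50
```

Even generous assumptions land in the tens of people, which is why tracking contacts and availability in a light CRM quickly pays off.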

Recruiting and organising can take a lot of effort and participants might just attend one usability test. To maximise the return on your investment, it’s worth thinking about other types of research they might also engage with. Such activities (e.g. a survey) can also be useful for keeping people interested who might have signed up some time ago.

 

3.  Reminding

A few days before testing, it’s worth reminding participants about the time, location, travel options, recording setup, and any due care considerations (e.g. bringing someone along for support). This will probably be their first time, so an overview of what it’s about and what to expect might also be helpful: –

  • What their input will help achieve
  • What they need and needn’t bring
  • What the research will involve
  • Where it is, directions and who to ask for
  • A contact number in case of difficulty

Having subjects arrive in good time and orientated, of course, helps the day run smoothly.

 

4.  Safety and due care

Safety, confidentiality, consent, recording, chaperoning and premature termination are difficult to specify, as they’ll vary according to the product and the environment. That said: –

  • Recordings can be named and referenced by an anonymous but useful convention, e.g. “HMRC_20170203_01”
  • Data protection legislation is relevant
  • Alongside screen activity, a dictaphone running all the time safeguards everyone and acts as a backup for the session
  • Participants usually quickly forget about observers in the room
  • If for whatever reason a session isn’t going well, it’s polite and reassuring to ask someone if they’d like to stop
  • Just being in a strange and unfamiliar environment is stressful, so building rapport in the run-up and when they arrive helps to relax and reassure them.
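The anonymous naming convention above (e.g. “HMRC_20170203_01”) is easy to generate programmatically, which keeps recordings consistent and sortable; a sketch:

```python
from datetime import date

def session_id(project, session_date, sequence):
    """Anonymous but sortable recording name: PROJECT_YYYYMMDD_NN."""
    return f"{project}_{session_date:%Y%m%d}_{sequence:02d}"

print(session_id("HMRC", date(2017, 2, 3), 1))  # HMRC_20170203_01
```

Because no participant name appears in the filename, the recording can be shared internally while the mapping to real identities stays with the consent forms.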

 

5. Scripts

If there’s time, script writing can be a “team sport”, though the end result needs to be simple, coherent and readily comprehensible. One advantage of the team collaborating on the script is that it fosters a sense of ownership, and so increases the likelihood of results finding their way into the codebase.

  • Clarity comes from editing and refining a script until it’s a clear narrative
    that tracks a straightforward user flow, peppered with simple, intuitive
    tasks and questions
  • A “slim” script helps everyone stay on task, whilst affording time to investigate
    interesting things that might arise
  • A simple script is more accessible, especially given the average reading age of UK adults
  • Plus it’s hard to read from a page and observe someone at the same time
  • Here are some “Interesting things” about writing clearly that are also relevant to script writing
  • Encourage participants at the start to be critical, e.g. “We’re here to test some early ideas, not you, so please be frank and speak as you find.”

 

Undertaking usability testing – a simple methodology

“Why You Only Need to Test with 5 Users” – Jakob Nielsen

In the late 90s Jakob Nielsen and Tom Landauer established that around 85% of usability issues could be identified by five users. So a day of usability testing, 5×1-hour sessions, has the potential to dramatically improve a site’s usability, and so its efficacy in achieving both user and business objectives.
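Nielsen and Landauer modelled this as found(n) = N(1 − (1 − λ)^n), where λ is the proportion of issues an average single user surfaces (they put it around 31%). The snippet below just replays that published model:

```python
def proportion_found(n_users, problem_discovery_rate=0.31):
    """Nielsen & Landauer: share of usability issues found by n users."""
    return 1 - (1 - problem_discovery_rate) ** n_users

# With the published rate of 0.31, five users surface roughly 84-85%
# of the issues, and each extra user adds less and less.
print(round(proportion_found(5), 2))  # 0.84
```

The diminishing returns are the whole argument for small, repeated rounds of testing rather than one large study.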

To realise this, Nielsen devised Discount Usability Testing: a lightweight methodology that encourages participants to speak their thoughts aloud as they go about using a website to accomplish tasks. It can be performed relatively easily and with few resources, so fits well with the maxims “test early and often” and, from Agile, “fail early, fail fast”.
Its simple format can also help product teams engage with research.

There are of course caveats: Rolf Molich, who worked with Nielsen in the early 90s to define the heuristics used in expert inspections, here highlights how the findings from small samples can vary dramatically.

Larsen cartoon about phones not being designed for hoofed creatures

Something else to mention is that research needs to be planned, i.e. described in a test plan that defines scope, practical considerations, integration with other research, etc.

In a series of posts I’d like to explore practical tips for Discount Usability Testing.