Tips for managing usability testing and recording results

Preparation and continuity

The first post on discount usability testing looked at organising a day of user research. The output of such a day is typically five or so screen recordings with verbal commentaries, and perhaps some notes. I don’t take many notes myself, being too busy following the script and attending to the subject’s verbal and non-verbal behaviour. Giving observers post-its to note down their observations is another means of recording results and engaging the team.

Preparing a subject involves describing the whys and hows of thinking aloud, and affirming that hearing about what they find difficult is as helpful as hearing about what works well. But commentating whilst trying to work something out isn’t a natural behaviour. Encouragement and prompting are often needed, especially when a subject gets stuck or has to think. At that point, reflecting back and asking what they’re trying to do helps to clarify the issue and prompts them to resume their commentary.

e.g.
Observer: “Is that an appealing deal?”
Subject: (reading silently)
Observer: “You’re reading that carefully. What are you thinking? Are you looking for something?”
Subject: “I can’t see if ‘hotel offers’ include passes for the rides.”

N.B. reflecting back to the user adds information to the recording that is useful for writing up.

 

Results and analysis

Reviewing and recording results takes headphones and about as many hours as were spent testing.
I usually log issues on a spreadsheet as I’m listening. The one below collates six users and chunks results by task and subtask. It doesn’t include any time-stamped links to exact places on the videos, as I don’t find such links are clicked often enough to merit the effort of adding them. But the full recordings/transcripts should always be available (subject to the terms of the consent form participants signed).

 

[Image: a table that records and ranks issues, observations and ideas from usability testing thorpebreaks.co.uk]
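If you prefer to build up the collation programmatically before polishing it in a spreadsheet, a minimal sketch along these lines might help. It is illustrative only: the tasks, field names and CSV layout are my assumptions, not the actual thorpebreaks.co.uk data.

# Minimal sketch: collating observations from several recordings into one
# table, chunked by task and subtask. All names and data here are made up.
import csv
from collections import Counter

# One row per observation, noted down while reviewing each recording.
observations = [
    {"subject": "S1", "task": "Book a break", "subtask": "Choose dates", "issue": "Calendar widget unclear"},
    {"subject": "S3", "task": "Book a break", "subtask": "Choose dates", "issue": "Calendar widget unclear"},
    {"subject": "S2", "task": "Book a break", "subtask": "Hotel offers", "issue": "Unsure if ride passes are included"},
]

# Frequency = how many subjects hit the same issue; useful later for RAG rating.
frequency = Counter((o["task"], o["subtask"], o["issue"]) for o in observations)

with open("usability_issues.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Task", "Subtask", "Issue", "Frequency"])
    for (task, subtask, issue), count in sorted(frequency.items()):
        writer.writerow([task, subtask, issue, count])

The resulting CSV opens directly in a spreadsheet, where the colour-coding and ticket links described below can be added by hand.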

 

“Ragging” issues (red, amber, green) assesses their severity. It can be based on a number of factors, e.g. length of delay (impact), the number of times an issue was reported (frequency)*, whether intervention was required to move the user on through the task, etc. Reviewing with just one assessor inevitably makes the evaluation subjective, so having someone else involved helps to moderate, as well as publicise, the findings.

* In project management, a risk is traditionally assessed by multiplying its severity by its likelihood.
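As a rough illustration of that severity-times-likelihood idea applied to RAG ratings, something like the sketch below could work. The 1 to 3 scales and the thresholds are my assumptions, not a standard.

# Illustrative RAG scoring: severity (impact) multiplied by likelihood (frequency).
def rag_rating(impact: int, frequency: int) -> str:
    """impact and frequency each on a 1-3 scale; returns 'red', 'amber' or 'green'."""
    score = impact * frequency  # e.g. a long delay (3) seen by most subjects (3) scores 9
    if score >= 6:
        return "red"
    if score >= 3:
        return "amber"
    return "green"

print(rag_rating(impact=3, frequency=3))  # red
print(rag_rating(impact=1, frequency=2))  # green

A second assessor can sanity-check the impact and frequency scores rather than the colours themselves, which keeps the moderation discussion focused.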

Using a spreadsheet to track tasks and record observations is one way to collate findings. Colour can improve layout and accessibility. Additionally, if a report is needed, the evidence can be cited with a single reference.

Also noting what was liked and worked well helps to balance the feedback and motivate the team.

 

[Image: analysing and communicating usability issues by referencing and indexing them, suggesting improvements and linking through to remedial tickets]

Whilst dev teams might be familiar with JIRA, stakeholders are perhaps more comfortable with spreadsheets and presentations. The last column of the sheet above links through to remedial JIRA development tickets. If more is done within JIRA itself, a sprint’s user-testing ticket can link directly to the remedial coding tickets.

Self-organising teams do what works best for them.

 

 
