Challenges with designing local government services

Terrain

A recent assignment with local government brought home the challenges of introducing digital, agile and service design into a traditional service provider: an environment that is neither digital first nor digital by default.

In the UK, local statutory services have endured many years of restructuring and funding cuts, whilst demand has steadily risen and technology has evolved. Though digital brings opportunity, realising it isn’t helped by changing legislative requirements, traditional models of service delivery, continual reorganisation, waterfall project management, and technology outstripping procurement.

The terrain is complex:

  • complex processes
  • legacy systems
  • very many stakeholders
  • commercial contractors pursuing their own agendas
  • traditional ways of working (waterfall)

Lined up against these are caring public servants armed with workarounds, special relationships, and decades of experience.

 

The model

Gov.uk must have faced similar, even greater challenges, which can be explored in A GDS Story (Government Digital Service).

How gov.uk looked back in 2011 – Directgov

The Gov.uk service manual describes a staged gate process for developing and delivering digital services. Checkpoint assessments need to be passed before a service can advance to the next phase. Part of the role of this external assessment is to cut through any internal agendas that might be adversely affecting development. It also helps ensure that everything that should be done has been, or else there’s a good reason why not.

The stages of GDS service assessment
Within these phases, teams are expected to work in agile sprints, continually testing and iterating.

 

Lessons learnt

With so much already done toward digital government, there’s arguably little to be gained from reinventing the wheel without good justification. For most services it’s likely something similar is already out there, which discovery research should identify. There are 418 principal councils, employing over a million people and with a budget of £44bn, all serving similar needs (local democracy think tank). How then could it possibly make sense to start from scratch with how a citizen should report a pothole, or which responsive grid to use?
Any kind of research needs to start by looking at what is already known and has been done: it moves thinking forward and saves time.

But even with a good example to follow, a lot of work remains. Making a wheel fit another bike can be a challenge: it takes planning, critical assessment of resource and requirement, decisiveness around what to lose, keep or adapt, iteration and patient fettling. None of which is as glorious as having one’s own idea adopted, but then in Edison’s words, “Genius is 1% inspiration, 99% perspiration”.

 

Social constructivism and “team sport”

Social constructivism is a sociological theory that describes how meaning is negotiated and shared through collaborating on artefacts and generally doing things together. It’s a helpful approach to take with user research.
Research is “academic” (not practically relevant) if it doesn’t tangibly affect a service or product: if findings aren’t reflected in the codebase. Involving the team guards against this: collectively formulating questions, preparing what’s to be tested, observing research and analysing results together, e.g. contributing to scripts, artefacts and analysis, and collectively deciding what to do as a result.

 

 

Jared Spool is a renowned UX professional and pundit who advocates engaging teams in research as observers. He believes it is the single most important factor in delivering a good user experience; a position supported by the GDS mantra: “User research is a team sport“.

 

Summary

This post aimed to outline the challenges to modern service design in local government and went on to suggest how they might be addressed –

  • adapting rather than inventing
  • adopting GDS’ staged gate model of service delivery along with agile
  • teams collaborating around shared artefacts
  • and participating in research
  • organising to support ongoing service development

It acknowledges that whilst digital might not obviously help with mending potholes (one of the most frequent enquiries), every service will increasingly have digital components. Importantly, with so many examples of similar issues having already been addressed by other authorities, there is little point in reinventing the wheel.

A language of visual design – the grammar of digital engagement

Researching for a county council led me to look at other local government websites. I was especially interested to see if there was a structure to their visual design, if they used a visual language.

Unlike Gov.uk, which is very uniform and accessible, local authority sites vary a lot.
I guess that might be because of –

  • diverse requirements – organisational, political, and user (from reporting a pothole to calculating mum’s financial contribution to her care)
  • numerous owners/stakeholders/agencies
  • legacy issues (these sites have been around a long time)
  • scarce resources
  • digital teams deriving from IT, print and corporate comms

And although citizens often come with well-known, specific purposes (transactional and informational), they bring with them widely differing –

  • mental models
  • digital skills and capabilities
  • accessibility and literacy needs
  • devices and internet connections (e.g. public library, pay-as-you-go, PSTN)

Watching people try to use a “difficult” site reminds me of ordering from an unfamiliar menu written in a different language.

Without delving too far into semiotics and semantics, sites can frustrate because they fail to communicate in a recognisable way. People are often busy, impatient, easily distracted, first-timers, or restricted to pointing and swiping on their phones, so is it fair or reasonable to expect them to learn a new language? Arguably the triumph of Apple was making the iPhone, a computer many times more powerful than those used to launch the Apollo moon shots, accessible to so many people straight out of the box. Achieving that level of usability takes skill, dedication, and user-centred design.

 

Nature.com – “Mosaic”

The first implementation of Mosaic – nature.com’s first visual language

Whilst working for a scientific publisher, we introduced a visual language for digital that was quite different to the conventions and workflows used for analogue (print and telephone).
A creative agency was commissioned to produce a new look and feel, which the UX and design team subsequently applied and refined across the domain. We knew the header, top-level navigation and some page layouts, but not the detail –
e.g. the complete IA, navigational patterns, typography, and how content should respond across viewport sizes.

We didn’t know how a published article should be presented in full, summarised in front of the paywall, précised in a “rolled-up” section, or referenced in a news email on a phone.
As we implemented and iterated, we created a pattern library and style guide, the basis of a visual language (vocabulary and grammar):  a set of formatted elements (content containers) and rules for such things as –

  • presenting content in different contexts
  • “responsiveness”
  • typography, breakpoints and imagery
  • the information architecture
  • the “voice”
  • accessibility
  • naming and formatting conventions
  • SEO mark-up…

Structure

The discrete structure of the language suggested a modular approach which facilitated research and iteration. As the old joke goes:
“How do you eat an elephant?”
“One spoonful at a time”.

Launching Nature.com’s visual language to the internal audience

Five years after Nature Plants was launched, the visual language is still evolving and being deployed: the former because that’s just what living languages do, the latter because it exists in a complex environment with legacy systems. By now there’s a well-trodden path for migrating journals.

If visual design and content are structured, straightforward, consistent and conventional, users can focus on their task. Simple heuristics, such as the logo positioned in the top left-hand corner and linking to the homepage, are a starting point, perhaps similar to the first words learnt from a tourist phrase book. But visual languages can be rich and complex, e.g. in medieval religious paintings, where symbols and conventions communicated complex narratives and abstract concepts to illiterate audiences. The challenge is for the language to be so straightforward and familiar it’s intuitive – in Steve Krug’s famous words, “Don’t make me think“.

It might seem a luxury to take time out to create an abstracted guide, but from experience it quickly repays the investment. One is evident in East Sussex County Council’s site (go beyond the landing page), and is perhaps why GDS looks to be getting more involved with local authorities.

Formative research – techniques and tips

The post “Different types of research“ categorised user research as either formative or summative
(aka generative/evaluative):

  • Formative investigates environmental and human factors, constraints, opportunities, behaviours, requirements, and objectives.
  • Summative evaluates the performance of designs and prototypes in meeting those objectives.

 

The purpose of formative research is to inform

For example, a pop-up survey for a government savings scheme (n=14) suggested people under-report their savings habit. Four people who said they didn’t save nonetheless regularly put aside their spare change. Although that behaviour isn’t exactly saving, it seems relevant. Another study for the same scheme found:

  • 63% of savings accounts were opened by women
  • The majority of account holders lived with a partner

Insights like these are useful for marketing, advertising, functionality (e.g. making it easy to deposit and pay in change), user journeys, support etc.

 

Common formative methodologies and techniques

These are my favourites. To help decide which to use, I anticipate the kind of results they’ll likely produce and discuss with the team how useful they’d be and the resource they’d take –

  1. Surveys –
    These can be face-to-face, remote, online, postal or conducted by agents. They can cover large sample sizes (n) to deliver statistically meaningful (quantitative) results.
  2. Card sorting –
    Typically used to define information hierarchies and navigation
  3. Contextual interviews –
    Participants are visited and questioned in their own environment (e.g. at home or work), which is more likely to trigger memories and naturalistic behaviour
  4. Focus groups –
    Useful for exploring themes and normalising, e.g. asking “What does everyone else think about that?”
  5. Ethnographic research –
    Participants are observed in their own environment. This has the advantage of revealing what people actually do, as opposed to what they say they do, and how something is actually used. If participants aren’t aware they’re being observed, e.g. by using a remote camera, this methodology can address the “observer effect”.
  6. Diary studies –
    Though it can be a challenge to recruit and engage suitable participants, diary studies have the advantage of running over longer periods, so they can evaluate longer processes and changes. N.b. scheduling regular entries can be helpful.
  7. Design the box (participatory design) –
    A creative exercise that uses the idea that packaging should convey essential aspects of a product
  8. Data mining –
    Data is increasingly being collected and archived, e.g. by the UK Data Archive at the University of Essex, the Office for National Statistics (ONS) and Citizens Advice, and much of it is freely available (see the sketch below)
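For instance, a few lines of analysis can turn an open dataset into formative insight. A minimal sketch, assuming a hypothetical CSV (stand-in file and column names) saved from one of the archives above:

```python
# A minimal sketch of mining an open dataset. The file name and columns are
# hypothetical stand-ins for a download from one of the archives above.
import pandas as pd

df = pd.read_csv("ons_household_savings.csv")  # hypothetical local file

# A quick formative question: how does regular saving vary by age band?
print(df.groupby("age_band")["saves_regularly"].mean().round(2))
```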

Methodologies which can be both formative and summative?

  1. Wizard of Oz –
    e.g. testing concepts or call centre scripts
  2. Comparative evaluation of journeys and low-fi prototypes

The research plan

Plan for each phase and update the plan

A research plan can be one of the outputs of a design sprint, when there is one. If not, it’s still helpful to write one as soon as possible. To be accessible to both team and stakeholders, it can be a brief document: two sides of A4, the first with headings and bullet points, the second a schedule of activities. The process of editing it down to such a short format will help to deepen one’s understanding and prepare for questions that might arise.

A time plan of research activities synchronised to the sprint cycle and linking to JIRA tickets.

 

A schedule of research activities from another project, synchronised to the Government Digital Service’s stages of development

 

Bitesize chunks of information and graphics are more accessible than prose-heavy paragraphs, and though editing can take as much time as writing, your readers will appreciate the effort.

The research plan is the first deliverable – but what’s it for?

I find it helps with:

  • “Setting out your stall” (a metaphor for stating position, approach and intentions)
  • Outlining the scale of the programme, research activities, number of participants, key contacts etc.
  • Scoping resources
  • Anticipating challenges
  • Publicising the research schedule and relevant milestones
  • Reassuring everyone that research is underway and fit for purpose

Consulting or seeking input from the team, stakeholders and even users when writing a plan helps to:

  • Stimulate and enrich your thinking
  • Make the process an exercise in participatory design (co-design)
  • Spread awareness of research
  • Obtain resource
  • Pave the way for results making an impact
Conferences offer opportunities to research and can also inform its direction

To keep ahead, it’s worth thinking about starting lengthy tasks while the plan’s being written,
tasks such as:

  • Establishing a “lite” contact and CRM programme for onboarding and tracking research participants
  • Obtaining budget for remuneration and expenses
  • Defining policies e.g. confidentiality, data protection
  • Working on the questions and artefacts that’ll be used for the first sessions

Another consideration is when to update the research plan. Gov.uk defines four stages of service development:

  • Discovery
  • Alpha
  • Beta (private and public)
  • Live

Revising the plan before each stage is helpful, as the research requirement will evolve from more formative to more summative.

Tips for managing usability testing and recording results

Preparation and continuity

The first post on discount usability testing looked at organising a day of user research. The output of such a day is typically five or so screen recordings with verbal commentaries, and maybe some notes. I don’t take many notes myself, being too busy following the script and attending to the subject’s verbal and non-verbal behaviour. Giving observers post-its to note down their observations is another means of recording results and engaging the team.

Preparing a subject involves describing the whys and hows of thinking aloud, and affirming that hearing about what they find difficult is as helpful as hearing about what works well. But commentating whilst trying to work something out isn’t a natural behaviour. Encouragement and prompting are often needed, especially when a subject gets stuck or has to think. At that point, reflecting back and asking what they’re trying to do helps to clarify the issue and have them resume their commentary.

e.g.
Observer: “Is that an appealing deal?”
Subject: (reading silently)
Observer: “You’re reading that carefully. What are you thinking, are you looking for something?”
Subject: “I can’t see if ‘hotel offers’ include passes for the rides.”

N.b. reflecting back to the user adds information to the recording that is useful for writing up.

 

Results and analysis

Reviewing and recording results takes headphones and about as many hours as were spent testing.
I usually log issues on a spreadsheet as I’m listening. The one below collates six users and chunks results by task and subtask. It doesn’t include time-stamped links to exact places on the videos, as I don’t find such links are clicked often enough to merit the effort of adding them. But the full recordings/transcripts should always be available (subject to the terms of the consent form participants signed).

 

A table that records and ranks issues, observations and ideas from usability testing thorpebreaks.co.uk
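A minimal sketch of how such a log might be structured, assuming hypothetical tasks, issues and six participants (P1–P6); a spreadsheet does the same job, but structuring the data this way makes frequencies easy to derive:

```python
# A minimal sketch (hypothetical tasks and issues) of an issue log:
# one row per issue, chunked by task and subtask, with a column per
# participant (P1-P6) marking who encountered it.
import pandas as pd

issues = pd.DataFrame(
    [
        ("Book a break", "Choose dates", "Calendar control not noticed", 1, 0, 1, 1, 0, 1),
        ("Book a break", "Pick a hotel", "'Hotel offers' terms unclear", 1, 1, 0, 1, 1, 0),
        ("Checkout", "Payment", "Error message not seen", 0, 1, 0, 0, 1, 0),
    ],
    columns=["task", "subtask", "issue", "P1", "P2", "P3", "P4", "P5", "P6"],
)

# Frequency = how many of the six participants hit each issue
issues["frequency"] = issues[[f"P{i}" for i in range(1, 7)]].sum(axis=1)
print(issues.sort_values("frequency", ascending=False)[["task", "issue", "frequency"]])
```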

 

“Ragging” issues (red, amber, green) assesses their severity. It can be based on a number of factors, e.g. length of delay (impact), the number of times it was reported (frequency)*, or whether intervention was required to move the user on through the task. Reviewing with just one assessor inevitably makes the evaluation subjective, so having someone else involved helps to moderate, as well as publicise, the findings.

* In project management, a risk is traditionally assessed by multiplying its severity by its likelihood
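A minimal sketch of that scoring idea, assuming illustrative 1–3 scales for impact and frequency, and made-up banding thresholds (they’re not a standard):

```python
# A minimal sketch of "ragging" an issue by impact x frequency, echoing the
# severity x likelihood model in the footnote. The 1-3 scales and the band
# thresholds are illustrative assumptions, not a standard.

def rag(impact: int, frequency: int) -> str:
    """impact and frequency are each scored 1 (low) to 3 (high)."""
    score = impact * frequency  # risk = severity x likelihood
    if score >= 6:
        return "red"    # blocks the task - fix before release
    if score >= 3:
        return "amber"  # schedule a fix
    return "green"      # cosmetic - note and monitor

print(rag(impact=3, frequency=2))  # red: long delay, seen by several users
print(rag(impact=1, frequency=1))  # green: minor, seen once
```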

Using a spreadsheet to track tasks and record observations is one way to collate findings. Colour can improve layout and accessibility. Additionally, if a report is needed, the evidence can be cited by a single reference.

Also noting what was liked and what worked well helps to balance the feedback and motivate the team.

 

Referencing and indexing issues, suggesting improvements and linking through to remedial tickets

Whilst dev teams might be familiar with JIRA, stakeholders are perhaps more comfortable with spreadsheets and presentations. The last column of the sheet above links through to remedial JIRA development tickets. For teams doing more in JIRA, a sprint’s user-testing ticket can link directly to remedial coding tickets.

Self-organising teams do what works best for them.

 

 

Visualising data

“More time is spent researching, analysing results and gleaning insights than communicating the results” – discuss.

That detracts from the impact of research, and doesn’t help answer the important question – “So what?”.
Visualising data helps to –

  • uncover the not so obvious
  • deepen understanding
  • communicate findings
  • and engage stakeholders.

Visualisations can be wonderfully imaginative, with big data only adding to the wow factor. But UR usually produces small datasets and low-tech visualisations.

Ways of presenting small data –

A questionnaire that asks Likert rating questions (strongly agree… strongly disagree), yields quantitative data suited to bar graphs.

Visual summary of survey results to a Likert rating question – http://peltiertech.com/diverging-stacked-bar-charts/

Quantitative data helps put such results in perspective and gives an overview. However, insights can still be gained when the sample size (n) isn’t large enough to be statistically meaningful.
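A minimal sketch of a diverging stacked bar for one Likert question, in the style of the chart above, using made-up counts:

```python
# A minimal sketch (made-up counts) of a diverging stacked bar for one Likert
# question: disagreement plotted left of zero, agreement to the right, with
# the neutral band straddling the centre line.
import matplotlib.pyplot as plt

labels = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
counts = [2, 5, 3, 8, 4]  # hypothetical responses, n = 22
colours = ["#b2182b", "#ef8a62", "#cccccc", "#67a9cf", "#2166ac"]

# Start so the negative responses (and half the neutrals) sit below zero
left = -(counts[0] + counts[1] + counts[2] / 2)

fig, ax = plt.subplots(figsize=(8, 2))
for label, count, colour in zip(labels, counts, colours):
    ax.barh(0, count, left=left, color=colour, label=label)
    left += count

ax.axvline(0, color="black", linewidth=0.8)  # the agree/disagree divide
ax.set_yticks([])
ax.set_xlabel("Number of responses")
ax.legend(ncol=5, fontsize=7, loc="upper center", bbox_to_anchor=(0.5, -0.35))
plt.tight_layout()
plt.show()
```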

The graph below didn’t result from a large longitudinal study, but from a small contextual inquiry around hospital appointment cards. Nonetheless, it illustrated a holistic, patient-centred view of multiple service delivery which primary care staff hadn’t seen before.

Visualising patient contact with health services over an 18-month period – horizontal lines represent referrals.

Another bar chart shows the results of rewording the questions of a satisfaction survey. The new instructions were understood more quickly, and the last, free-text question invited longer answers.

The effect of re-wording the questions (x axis) of a short satisfaction survey upon completion time (y axis).

 

Taking down a wall of sticky notes – visualising data

The patent on sticky notes apparently expired in the 90s, some time before UX took off.
That’s 3M’s loss, as user research consumes vast quantities for collecting and visualising data.

Post it notes stuck to a wall

What to do with a great mosaic of comments?

A wall of sticky notes that’ve been carefully written, sorted and arranged represents a productive, collective experience, but it can be difficult to work with later.
Left up for too long, it becomes invisible and overlooked, giving the impression the work has not progressed and the information is no longer referred to.

But photographing them makes the data even less accessible and meaningful. Walls of stickies can become albatrosses: flaking remnants of yesterday’s workshop which no one feels empowered to take down.

One option –

is to digitise the data by collating it into a simple spreadsheet or database. Even when statistical analysis isn’t possible, digital is arguably more accessible, durable and suited to analysis. In the example below, the number of times the same comment was made (top x-axis) is represented by font size,
e.g. “Work experience” (mentioned by 12 people) has text proportionately larger than “university visits” (4).

This format, used to present the output of a large workshop, visualises the extent to which attendees agreed with different suggestions.

Further information comes from the y axis, which groups responses by benchmarked activity.

An A1-size poster that scales and collates the output of a large workshop (n ≈ 200)
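A minimal sketch of the scaling idea behind the poster, assuming hypothetical comments and counts; the size multiplier is something to tune by eye:

```python
# A minimal sketch (hypothetical comments and counts) of scaling digitised
# sticky-note comments so that font size is proportional to the number of
# people who made each comment.
import matplotlib.pyplot as plt

comments = {"Work experience": 12, "Mentoring": 7, "University visits": 4, "CV clinics": 2}

fig, ax = plt.subplots(figsize=(6, 3))
for i, (comment, mentions) in enumerate(comments.items()):
    ax.text(0.05, 0.85 - i * 0.22, f"{comment} ({mentions})",
            fontsize=8 + 2 * mentions,  # size scales with agreement
            transform=ax.transAxes)
ax.axis("off")
plt.show()
```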

Another version, here summarising some of the outputs from a discovery phase, presents stakeholders on the y axis and the stages of a process along the x axis, with anecdotal comments positioned according to who made them.
The poster is a flexible format that doesn’t always require graphic design.

Associating feedback with different stages of a process (x) and different user groups (y)

Different types of research

Who researches what

In the world of UX and UR, research can be crudely categorised into –

  1. Formative – What’s useful
  2. Summative – What’s useable

The first helps to create the product by investigating its context, environment, user needs, constraints etc., and the second evaluates the performance of designs in meeting those requirements.
This article on Userfocus has more detail about both.

The Venn diagram below is an attempt to categorise activities by discipline. Involving and collaborating with other disciplines helps to –

  • Stretch resources
  • Improve the chances of findings being acted upon
  • Enrich practice

 

Categorising research activities by discipline – a Venn diagram of research activities and outputs typically undertaken by marketing, UR and academics

Analogue, digital, visibility, affordance and Klingons

It might not zap Klingons but I know what to do nonetheless


If you don’t remember the original Star Trek, you’ll probably not recognise the quote or think the green Dymo printer resembles a phaser gun.

“Where no Dymo printer has gone before” – comparing the affordance of digital and analogue Dymo printers

The idea for this post came from realising the thing sitting on the next desk wasn’t a scientific calculator, but a modern digital version of the old analogue tape printer I’d been given as a Xmas present back in the day. It made me think how design can suggest function and operation, trigger (groan) an emotional response, and make the everyday a little more interesting.

The old Dymo has two triggers: one to impress the print head onto the tape (green), the other to cut the label from the roll (white). The visibility of the white tape cutter is better because of its contrasting colour and raised edge. Its affordance is also better, because it’s trigger-shaped and located where you’d expect a trigger to be in relation to the handle. That said, whilst I know what to do with it, that it cuts the tape isn’t obvious. On balance, though, most of its design helps to build an accurate mental model of its operation.

Recognising complexity

A high level task analysis for making a label might look like this:

  1. Ensure tape is ready and lined up
  2. Select appropriate letter
  3. Impress print head onto tape
  4. Advance tape
  5. Repeat 2 – 4 until finished
  6. Cut tape

That’s a reasonably complex process, but one I quickly learnt through simple experimentation with the analogue printer. It’s not a complicated device and uses familiar controls: triggers are for squeezing, knobs and wheels for turning etc.

The digital version has only buttons to press (good affordance), so interacting with its controls is simpler; but its operation is arguably more complicated. Its form does not inform a mental model of what it does or how it works.

So fewer kinds of controls don’t necessarily make something easier to use. I also wonder how accessible the digital version might be to a user with difficulty distinguishing visual detail, impaired fine motor control, or some cognitive deficit.

I think most people would take longer to get a label out of the digital version, and find it less satisfying, but it might well be the best choice if you have 50 to make.
As Don Norman observes in his book Living with Complexity, “complex is different from complicated”.

A checklist for evaluating webforms

Heuristics

At some point, looking for a definition of “heuristic” turns up “rule of thumb”, which might or might not be helpful, depending on whether you know that means rough-and-ready, widely accepted approximations that are “good enough”.

Practically, when evaluating usability, there doesn’t seem much point in booking a lab and going to all the trouble of calibrating the eye tracker and analysing traces in order to evidence that something like a “spinner” will stop a user pressing the “submit” control again whilst she waits for the system to process an input (Nielsen’s first heuristic: “Visibility of system status“).

Nielsen goes on to list a further nine considerations, which can be used to structure “expert” heuristic reviews.

 

More heuristics

If 10 aren’t enough, here are a further 23 I gathered to evaluate webforms, but which are also relevant elsewhere:

  1. Is the user sufficiently orientated and motivated for what’s about to happen?
  2. Will the user have ready everything they’ll need before they start?
  3. Does the user know how long the task will likely take them?
  4. Does it ask for the minimum amount of information?
    (organisations tend to ask for more than is necessary)
  5. Is security addressed? e.g. a Norton security logo, a data protection policy
  6. Is privacy addressed? e.g. an accessible and available policy
  7. Are there any distractions from the user flow?
  8. Is it feasible to complete the form in one sitting? If so, let’s try.
  9. If not, does it support the user picking up as near as possible to where they might’ve left off? (e.g. are there meaningful breakpoints in the process)
  10. Is the process broken down into sensible blocks?
    e.g. personal information, choice, transaction details, payment, summary, next steps
  11. Is positive feedback given? e.g. indication of progress, validated form fields
  12. Are users helped to understand terminology and references, and is the language accessible? (the average reading age is surprisingly young)
  13. Is copy tight and concise?
  14. Are calls to action clearly visible, and consistent in appearance and behaviour?
  15. Are fields grouped into sensible, meaningful sections, and do they accord with convention?
    e.g. Block 1 – marital status, name, address, phone; Block 2 – …
  16. Are help links and resources visible and relevant to their context?
    (and for accessibility, do they rely on proximity to provide their meaning?)
  17. Is form error feedback immediate and inline?
    e.g. feedback needn’t wait for the submit button (see the sketch after this list)
  18. Though some forms are better done on a desktop, should that preclude responsive design?
  19. Is support flexible enough to meet different needs and levels of proficiency?
    e.g. a tip by a form field with a link to further information
  20. Are pop-ups (interstitial screens) avoided? They’re seldom for users’ benefit.
  21. Does submission trigger a reassuring, engaging welcome/thank-you email, including next steps and anticipating user needs?
    e.g. “Thanks for submitting your application –
    would you like us to email you a summary?
    Please contact us if you don’t hear back from us within two weeks.”
  22. Post-registration, when users return and have maybe forgotten their login and what they’ve previously done, is it easy for them to get in and pick up?
    e.g. login with other existing credentials (OAuth)
  23. Is account setup and recovery straightforward?
    e.g. an email with links to password reset
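Touching on point 17, a minimal sketch of per-field (“inline”) validation, assuming hypothetical field names, rules and messages; in a real webform the same checks would run on each field’s change event rather than on submit:

```python
# A minimal sketch of per-field ("inline") validation, with hypothetical
# field names, rules and messages. Run against each field as it changes,
# feedback needn't wait for the submit button.
import re
from typing import Optional

VALIDATORS = {
    "email": (re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
              "Enter an email address like name@example.com"),
    "postcode": (re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$", re.I),
                 "Enter a UK postcode like BN7 2LZ"),
}

def validate_field(name: str, value: str) -> Optional[str]:
    """Return an error message for one field, or None if it's valid."""
    pattern, message = VALIDATORS[name]
    return None if pattern.match(value.strip()) else message

print(validate_field("email", "not-an-email"))    # error message, shown inline
print(validate_field("email", "jo@example.com"))  # None - the field is valid
```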

Caroline Jarrett is an authority on form design, and UXplanet.org and Smashing Magazine have published these articles on the subject.