Cross-government design meeting #33: Measuring success – 18/1/22
The cross-government design community runs great events. Here are some highlights from my notes from last week’s session on measuring success.
Measuring the value of service transformation – Matthew Lyon, Head of Economics and Analysis, Central Digital and Data Office (Cabinet Office)
Reform of the Animal Licensing service led to a 50% reduction in processing time, and a 30% reduction in time for FOI requests. 6,600 hours saved.
User satisfaction increased from 68% to 77% between 2018 and 2020.
Prepare to raise a child service – saved 12 minutes per user. With around 650,000 users, that’s roughly 130,000 hours saved.
Department for Transport research suggests that we value leisure time at about £5-7 per hour, so you can get a £ value for time saved.
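That conversion is simple enough to sketch. The £5-7 per hour range is from the talk; the helper function below is just my own back-of-the-envelope illustration, not anything the speaker showed.

```python
# Convert hours saved into a rough £ value using the DfT leisure-time
# range quoted above (about £5-7 per hour). The hours figure is from the
# talk; the function is my own illustrative helper.

def value_of_time_saved(hours, rate_low=5.0, rate_high=7.0):
    """Return a (low, high) £ estimate for a number of hours saved."""
    return hours * rate_low, hours * rate_high

# Animal Licensing reform: 6,600 hours saved
low, high = value_of_time_saved(6_600)
print(f"£{low:,.0f} to £{high:,.0f}")  # £33,000 to £46,200
```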
The cost of failure demand: measuring the impact of poor user experiences at HMCTS – Aliane Alves, senior service designer, and Sam Brierley, Head of User-centred Design at HMCTS
“A problem for a service almost always becomes a problem for the organisation providing the service”. Stuff like avoidable contacts, unsolicited inbound emails, rejected applications, staff doing data entry, lots of manual checking and cross-referencing.
The easiest way to understand failure demand is to do contextual research with support centre staff, as they are usually on the receiving end of it.
The team came up with a snappy meme to help people think about the cost of failure demand: “A typical CTSC caseworker’s time costs 50p a minute.” This soundbite was much more effective than showing people a spreadsheet.
41% of calls to the Apply for probate service were from applicants wanting an update on their case. This failure demand cost £40,800 a month.
They wanted to reduce this to 20% of calls. So they started using GOV.UK Notify to tell people about the status of their case. This cost £37,000 over 8 weeks (one content designer, one business analyst, one tech lead, and one developer).
They came within 3% of their goal. For every phone call they record who called, what it was about, and why; they also ran quantitative and qualitative questionnaires with call agents.
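The 50p-a-minute soundbite makes the arithmetic easy to follow. The sketch below uses the figures quoted in the talk (£0.50/minute, £40,800/month, the 41% → 20% target, and the £37,000 build cost); the payback calculation at the end is my own extrapolation, not something the speakers presented.

```python
# Back-of-the-envelope failure-demand arithmetic using the talk's figures.

CASEWORKER_COST_PER_MIN = 0.50   # £ - "50p a minute" (from the talk)
MONTHLY_COST = 40_800            # £/month for "where's my case?" calls (from the talk)

# Caseworker time that £40,800/month implies at 50p a minute
minutes_per_month = MONTHLY_COST / CASEWORKER_COST_PER_MIN
print(f"{minutes_per_month:,.0f} caseworker-minutes per month")  # 81,600

# Cutting update calls from 41% to 20% of all calls removes roughly
# (41 - 20) / 41 of that monthly cost.
monthly_saving = MONTHLY_COST * (41 - 20) / 41
print(f"£{monthly_saving:,.0f} saved per month")  # ~£20,898

# So the £37,000 spent on Notify status updates would pay for itself in:
print(f"{37_000 / monthly_saving:.1f} months")  # ~1.8 months
```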
Improving the quality of user feedback collection on GOV.UK – Jeremy Yun, Senior interaction designer, GDS
Feedback comes to GOV.UK through several different channels and formats. The team mapped out the different ways that it’s collected and used.
They’re simplifying the frontend survey, helping people classify their feedback (to help downstream use: “can’t find”, “don’t understand”, “doesn’t work”, “other”), simplifying how information is collected and stored, using data science to help automate feedback analysis, and reviewing how we distribute feedback.
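At its very simplest, “using data science to help automate feedback analysis” could look something like the toy keyword classifier below, sorting free-text feedback into the four categories above. This is purely my own illustration; the actual GDS pipeline will be far more sophisticated than keyword matching.

```python
# Toy classifier for the four feedback categories mentioned above.
# Illustrative sketch only - not the GDS implementation.

CATEGORY_KEYWORDS = {
    "can't find": ["can't find", "cannot find", "where is", "looking for"],
    "don't understand": ["confusing", "unclear", "don't understand"],
    "doesn't work": ["broken", "error", "doesn't work", "won't load"],
}

def classify_feedback(text: str) -> str:
    """Assign a piece of free-text feedback to one of the four buckets."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "other"

print(classify_feedback("The page won't load on my phone"))   # doesn't work
print(classify_feedback("Where is the form for renewals?"))   # can't find
```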
Designing and improving the TfL Go app – Hannah Kops, Head of Experience, and Dan Bean, senior product manager, Transport for London
The vision is: “A personal travel assistant for everyone in London, which helps you to make the right choice at the right time, and provides TfL with the insight to keep London moving”.
They’ve been using pirate metrics (AARRR) for their MVP:
- Awareness – app store impressions
- Acquisition – downloads
- Activation – complete onboarding
- Retention – BAU, opens per day
- Referral – telling friends (app store ratings?)
- Revenue – money per user
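The stages above read as a funnel, which is how pirate metrics are usually tracked. A quick sketch of stage-to-stage conversion rates; all the counts are hypothetical examples of mine, not TfL Go numbers.

```python
# The AARRR stages form a funnel: each stage is a subset of the one
# before it. All counts here are hypothetical, not TfL Go data.

funnel = [
    ("Awareness (impressions)", 1_000_000),
    ("Acquisition (downloads)", 80_000),
    ("Activation (onboarded)", 52_000),
    ("Retention (daily openers)", 18_000),
]

# Conversion rate from each stage to the next
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")
```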
Impressive focus on accessibility from the start.
Solve deep needs, not superficial wants with Top Tasks – Gerry McGovern, author and consultant
Focus on designing for the top tasks that users have – typically 3-5 tasks account for 25% of all user activity.
You can work out these tasks by outlining a long list of about 50, then asking users to choose their top 5 from a randomised list. That gives you a league table.
Then benchmark and work to improve:
- The success rate
- The time to complete
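The league-table step is essentially a vote tally. A minimal sketch, assuming each survey response is a user’s top-5 picks; the task names and responses below are hypothetical examples, not real survey data.

```python
# Build a Top Tasks league table from users' top-5 picks, as described
# above. Task names and responses are hypothetical examples.
from collections import Counter

# Each response: one user's top 5 tasks, chosen from the randomised long list
responses = [
    ["renew licence", "check status", "pay fee", "update address", "contact us"],
    ["check status", "renew licence", "pay fee", "find office", "download form"],
    ["check status", "pay fee", "renew licence", "contact us", "update address"],
]

votes = Counter(task for picks in responses for task in picks)

# League table: most-chosen tasks first
for rank, (task, count) in enumerate(votes.most_common(), start=1):
    print(f"{rank}. {task} ({count} votes)")
```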