Interoperability, Open Standards and APIs

I recently attended GDS Tech Talks: Interoperability and Open Standards. Here are my notes on the sessions I attended.

What’s new in Web Standards? – Dan Appelquist

Dan is Samsung Internet Director of Web Advocacy, co-chair of the W3C Technical Architecture Group and Open Standards Board member.

Connections between the web and your device: 

  • WebNFC
  • Web Bluetooth
  • Web USB
  • Serial API
  • Gamepad API
  • SMS Receive API
  • Contacts API
  • Clipboard Access APIs
  • File system APIs

A lot of these need their privacy implications thinking through. e.g. a clipboard access API could read from or write to the clipboard without the user knowing what was being done.

New features that make the web able to develop a richer experience:  

  • Progressive Web Application functionality: Manifest File, Service Worker, Push Notifications, Badging API
  • WebXR
  • Web Assembly
  • WebGL
  • Web Payment

New layout capabilities: 

  • CSS Grid
  • “Houdini” APIs (let you script styling from the JavaScript layer)

New communications capabilities:

  • WebRTC
  • WebSockets
  • Streams
  • Web Transport

Enhancements to Web Security:

  • Feature policy
  • Signed Exchanges
  • Packaging

What makes the web ‘webby’?

  • Open
  • Linkable
  • Internationalised
  • Secure and private
  • Multi-stakeholder
  • Does not require a single authentication method or identity
  • Zero friction

What makes the web open?

  • Built on open standards
  • Based on user needs
  • Transparent and open process
  • Fair access
  • Royalty free
  • Compatible with open source
  • Multiple implementations

What makes a standard open?

  • Collaboration between all interested parties, not just suppliers
  • Transparent and published review and feedback process. Wide review is crucial when thinking about new standards. The W3C look at new proposals and discuss them in the open on GitHub

When building new specifications, how do we make sure that they are ethical?

W3C have a privacy and security questionnaire that they encourage everyone to work through when working on a new specification.

Dan also mentioned some ethical frameworks to draw on.

You can get involved in Open Standards by joining a community group or working group.

API for Humans and the Machines – Kin Lane

Kin is the API Evangelist, and Chief Evangelist at Postman.

You should follow a design process when creating an API.

That way you can tease out assumptions, and test value before carrying out technical development work. Iteration is quickest and cheapest when you do it before you write any code!

Start off by defining an endpoint for the API, the values that you’d want to send to it, and giving an example of what you expect it to return.

You can use tools like Postman, and publish this test API, giving you an endpoint for testing, and letting people try working with it.
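
Here’s a rough illustration of that kind of mock: an endpoint that returns a canned example response, so people can try the API before any real logic exists. This is a minimal sketch – Flask, the route, the parameters and the response shape are all my assumptions, not anything from the talk.

```python
# A design-first mock endpoint: it returns a hard-coded example response
# so that people can try the API before any real logic is written.
# The route, parameters and response shape are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/items")
def list_items():
    # The values we expect callers to send (hypothetical parameters)
    lat = request.args.get("lat", type=float)
    lon = request.args.get("lon", type=float)
    # A canned example of what we expect the real API to return
    return jsonify({
        "query": {"lat": lat, "lon": lon},
        "results": [{"name": "Example item", "distance_km": 1.2}],
    })

if __name__ == "__main__":
    app.run(debug=True)
```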

With OpenAPI, Swagger and Postman, you can publish your documentation from the code.

Some things your API should have:

  • A choice of response format. Don’t just give JSON – let people receive CSVs if they want them. This makes things more open to non-developers. (See the sketch after this list.)
  • A management layer: access keys, rate limits. Apigee, Tyk, Mulesoft and Kong are tools for this.
  • Governance
  • Monitoring
  • Automated testing in your Continuous Integration / Continuous Deployment pipeline
  • Security testing
  • A clear point of contact for support
  • A plan for communications. Announce your API and new versions. Explain the purpose and what’s changing. You should have a comms strategy around every release. Without evangelists and communications, your API won’t last.
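
To illustrate the first point in the list above, here’s a minimal sketch of an endpoint that serves the same data as JSON or CSV depending on the request’s Accept header. Flask, the route and the data are my assumptions, not anything Kin showed.

```python
# Serving the same data as JSON or CSV, based on the Accept header.
# Flask, the route and the data are assumptions for illustration only.
import csv
import io

from flask import Flask, Response, jsonify, request

app = Flask(__name__)

ITEMS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

@app.route("/v1/items")
def items():
    if "text/csv" in request.headers.get("Accept", ""):
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=["id", "name"])
        writer.writeheader()
        writer.writerows(ITEMS)
        return Response(buffer.getvalue(), mimetype="text/csv")
    return jsonify(ITEMS)  # default to JSON
```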

Build a food bank API – part 3

Read part 1 or part 2 of this project to find out the background to this post.

The API now exists online.

Here’s an example request for the nearest 5 foodbanks to latitude 52.629958, longitude 1.298408. Results are returned in JSON format, which is machine-readable. I have plans for a human-facing experience too.
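
If you’d like to make a similar request yourself, it might look something like the sketch below. The base URL and parameter names are placeholders, not the real endpoint (which was linked above).

```python
# Asking for the nearest five food banks to a given location.
# The base URL and parameter names are placeholders, not the real endpoint.
import requests

response = requests.get(
    "https://example.org/api/foodbanks",  # placeholder URL
    params={"lat": 52.629958, "long": 1.298408, "limit": 5},
    timeout=10,
)
response.raise_for_status()
for foodbank in response.json():
    print(foodbank)
```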

The service is running on an entirely renewably-powered hosting provider in Switzerland. The extra time taken to communicate with a server a bit further away isn’t significant.

Here’s what I think the next steps are:

  • Create a clear, professional homepage (rather than just a line of text!), so that new users have an idea of the purpose of the API, and so that it looks credible.
  • Produce documentation on how to use the API, so that developers understand how to interact with it.
  • Create a web page that uses the API, taking a user’s location and showing them the nearest foodbanks and what they need. Having built the API, this feels like a natural next step. Anyone who goes to this page will be able to find out which food banks are near them, and what items they need.
  • Tell people about it, so that developers can start using the API, and people can start using the service to find out what items their local foodbanks need them to donate. We’ll have two minimum viable products – one API, and one human-facing service – and it’ll be time to find out if there’s any interest in using them.

I’ll be collaborating to make the above happen, which is exciting! We’ll be doing some user testing as well, to see how people use the API and documentation.

Build a food bank API – part 2

I’ve made great progress on this work in the last couple of months. (Read part 1 of my project to create a foodbank API)

My goal was “Make an API that, for a given geolocation, returns the nearest 3 foodbanks, with a list of the items that they need.”

I’ve achieved this for something running locally (i.e. just on my computer, but not on a web server that anyone could access). You can download the code and follow the instructions to run it yourself, if you have the Python programming language installed on your computer. I actually went slightly further than planned – you can specify the number of foodbanks you want to see, and you can also find out the items needed by a given named foodbank.
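
The heart of the nearest-N lookup can be quite small. Here’s a minimal sketch of the idea – not my actual code – assuming an in-memory list of food banks and using the haversine formula for great-circle distance.

```python
# Nearest-N lookup over an in-memory list, using haversine distance.
# A sketch of the idea only: names and data are illustrative.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def nearest(foodbanks, lat, lon, n=3):
    """Return the n food banks closest to (lat, lon)."""
    return sorted(
        foodbanks,
        key=lambda fb: haversine_km(lat, lon, fb["lat"], fb["lon"]),
    )[:n]

foodbanks = [
    {"name": "Example Food Bank", "lat": 52.63, "lon": 1.30, "needs": ["UHT milk"]},
    {"name": "Another Food Bank", "lat": 51.50, "lon": -0.12, "needs": ["tinned fruit"]},
]
print(nearest(foodbanks, 52.629958, 1.298408, n=1))
```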

The next step is to get it running online so that anyone can use it.

I’ve been testing the risky assumptions:

  • If I know the URL of a given foodbank’s page on food donations, I can work out what items they need.
    Yes. I’ve written code to do this.
  • All Trussell Trust foodbanks follow the same way of organising their websites.
    Mostly. About 9% of them don’t follow the standard format.
  • All Trussell Trust foodbanks follow the same way of describing the items they need.
    As above.
  • I can access or somehow generate a comprehensive and accurate list of all Trussell Trust foodbanks.
    Yes. I stumbled across this in the HTML on the Trussell Trust’s Find a Food Bank page. I can get this list with a single GET request.
  • If I have a list of Trussell Trust foodbanks I can straightforwardly work out the URLs of their pages describing the items they need.
    Mostly, yes. I’ve written code to do this.
  • I can scrape the information I need from the relevant server/servers in a courteous way
    Not sure yet. I assume all of the Trussell Trust’s standard sites are hosted on a single web server. I make a single GET request to get the names and URLs of all the foodbanks, but each ‘items needed’ page is a separate request. I’ve included a pause between each request (see the sketch after this list), but I don’t know if it’s too long or too short.
  • It won’t be very difficult to build a data representation of food banks and required items, or to store this in an appropriate database.
    This was quite straightforward. And I didn’t even need a database, as I’m going to hold all the information in memory and not manipulate it.
  • Building and running the API won’t be too much fuss. (Or, less concisely: It’s possible to build a lightweight, modern infrastructure to host a database for this API and serve requests without too much complexity or cost.)
    I’ve built an API that runs locally. Hosting it online as a real webserver should be reasonably straightforward. That’s the next step. I’ve found an entirely-renewably-powered web host, which might help me meet my extra goal of running this API entirely renewably.
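
The scraping pattern described in the list above looks roughly like the sketch below. The URL, the CSS selector and the pause length are placeholders – the real values depend on the Trussell Trust’s site structure.

```python
# Courteous scraping: one request per page, with a pause between requests.
# The URL, the CSS selector and the pause length are placeholders.
import time

import requests
from bs4 import BeautifulSoup

PAUSE_SECONDS = 5  # a guess at a polite delay between requests

def items_needed(url):
    """Scrape the list of needed items from a food bank's donations page."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [li.get_text(strip=True) for li in soup.select("ul.shopping-list li")]

needed = {}
for url in ["https://example.foodbank.org.uk/give-help/donate-food/"]:
    needed[url] = items_needed(url)
    time.sleep(PAUSE_SECONDS)  # pause so we don't hammer the server
```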

Read part 3 of this project to find out what I did next.

ODI Summit 2019 – summary

A summary of the sessions I attended at the Open Data Institute’s Summit on 12 November 2019

Tim Berners-Lee and Nigel Shadbolt, interviewed by Zoe Kleinman

Tim Berners-Lee described commercial advertising as “win-win”, because targeted advertising is more relevant. But “political advertising is very different… people are being manipulated into voting for things which are not in their best interests.”

Nigel Shadbolt: There’s a risk that people just move on to new shiny things. Creating a common data infrastructure is unfinished business.

Berners-Lee: We should be able to choose where our data is shared, rather than it just being impossible because systems can’t speak to each other. “You can share things with people that you want to share it with to get your life done.”

Shadbolt: Data sharing has to be consensual. Public data shouldn’t be privatised. We need transparency and accountability of algorithms used to make decisions on the basis of data. Platform providers are controlling and tuning the algorithms.

Berners-Lee: How might we train algorithms to feed us news that optimises for ‘aha’ connection moments, rather than feelings of revulsion?

Kriti Sharma – Can AI create a fairer world?

If you’re building tools with data, the biases of that data are perpetuated and potentially amplified, which can worsen existing inequalities. e.g. access to credit or benefits, or deciding who gets job interviews.

  • Early on in a design process, think about how things could go wrong.
  • Train machine learning or AI on more diverse datasets.

An MIT test of facial recognition found an error rate of 1% for lighter-skinned men. For darker-skinned women, the error rate was 35%.

  • Build diverse teams. Only 12% of the workforce on AI and machine learning are women. A more diverse team is more likely to question and correct biases.

Data Pitch accelerator

An EU-funded accelerator, connecting the public and private sectors to create new data-driven products and services. A 3-year project.

28 data challenges, 13 countries.

4.6 million euros invested
14.8 million euros “value unlocked” – additional sales, investment and efficiencies. These are actual numbers, not optimistic forecasts.

datapitch.eu/datasharingtoolkit

How do we cultivate open data ecosystems?

Richard Dobson, Energy Systems Catapult
Leigh Dodds, Open Data Institute
Rachel Rank, Chief Exec, 360 Giving
Huw Davies, Ecosystem Development Director, Open Banking

Energy Systems Catapult:
If you want to move to renewable energy, you need to know what’s produced, where, and when.

So BEIS, through a Catapult scheme, set up a challenge on this. Seamless data sharing was crucial.

360 Giving:
Help grant makers open up their grant data in an open format so people can see who is funding what, why, and how much.

Open Banking:
Catalysed by regulation from the Competition and Markets Authority. The UK required the largest banks to fund an implementation entity, to make sure the work was effective and standards-driven, and to set up a thriving ecosystem. So they worked on standards for consent and security. Every 2 months the ecosystem doubles in size.

When encouraging people to contribute to an ecosystem, show value, don’t tell people about it.
Don’t talk to people about organisational identifiers. Show them why their grants can’t be seen alongside other grants when they haven’t been collecting these identifiers. People had such low insight into what other people were funding that this was very compelling. Make people feel left out if they aren’t sharing their data.

Thoughts on making a healthy ecosystem:

You need standards for an ecosystem to scale

Accept that even with common standards and APIs you’ll get a few different technical service providers emerge, then people emerge who add value on top of this. (This was the experience in Open Banking)


“You can’t over-emphasise the importance of good facilitation at the heart of the ecosystem”
(I took this as: you need investment from somewhere to make this collaboration happen)
Open Banking did lots of work to collaboratively set up standards that everyone bought into. And they did lots of work facilitating and matchmaking to get people working together, to understand each other and provide more value.

Need to move away from just thinking about publishers and consumers. Think about the ecosystem more widely.

“When great stuff happens, shine a light on it and celebrate it”

Don’t pre-empt your users. They’ll surprise you.

Work out a way to police/protect data quality without having a single point of failure

Don’t aim for perfection, aim for progress
Start with what you’ve got. Perfect data doesn’t exist.

Caroline Criado Perez – Invisible Women: exposing data bias in a world designed for men

[This was the best session of the day by far. Excellent insight and communication.]

Most data, and the decisions based on it, has been predicated on the male experience.

Le Corbusier defined the generic human – the archetype to design buildings for – as a 6ft British police detective. He rejected the female body as too unharmonious.

Voice recognition software is 70% more accurate for men. 70% of the sample databases are male.

Car crash test dummies for decades were only male. The female ones used now are just scaled down male ones. 2015 EU regulations only said that female crash dummies should be used in 1/5 tests, and only in the passenger seat. Women are 47% more likely to be injured in a car crash and 17% more likely to die.

Medical diagrams generally centre the male body, and then have the female body as little extracts on the side. Female body seen as a deviant from the (male) standard.

Yes, the menstrual cycle is a complicating factor. So you need to study it! Heart medication and antidepressants are affected by it.

How many treatments might we have ruled out because they didn’t work on men – treatments that might have worked on women, but that we never researched because they didn’t work on the default male body?

Young women are almost twice as likely as men to die of heart problems in hospital.

Machine learning amplifies our biases.
A 2017 study on image labelling algorithms found that pictures involving cooking were 33% more likely to be categorised as women.

When thinking about different types of transport use, the way that you classify different types of travel is important. If you don’t bundle ‘care’ together as a category, you can undersell its importance relative to employment-related travel. In general, we undervalue women’s unpaid care work. You should collect sex-disaggregated data – and be careful not to do this by proxy.

Women tend to assess their intelligence accurately. Men of average intelligence think they’re more intelligent than 2/3 of people.

Equality doesn’t mean treating women like men. Men are not the standard that women fail to live up to. Don’t fall into this when you try to fix inequality.

Diversity is the best fix for this sort of thing.

Intersectionality is even more of a problem, but wasn’t the focus of this session. 

John Sheridan, Digital Director at the National Archives

The context in which data was created is important.

Good quality URLs are essential to data infrastructure.

Good quality processes for change matter too: understanding user needs better and improving the data.

Manit Chander on information sharing in the maritime industry

In the maritime industry, information sharing has been fragmented, and data classification is not standardised.

HiLo gets internal near-miss data, does predictive risk modelling, and produces risk analysis and good practice.

They get messy data shared with them and then tidy it up at their end.

They produce simple, easy-to-apply, non-judgmental insights.

They focus on building trust as the most important thing to sustain the community.
The people providing the data are the key group here.

People will share their information if they can see value to them.

  • Reduced risk of lifeboat incidents by 72%
  • Reduced engine room fires by 62%
  • Reduced risk of bunker spills by 25%

Sustainability and the climate change emergency

Notes from an event at the Royal Geographical Society, 9 October 2019. Using data to build public and decision-maker awareness of climate change. (My sense was actually that the event showed that stories are more powerful than data in getting people to care about this kind of thing)

Sophie Adams, Ofgem

Ofgem is working to decarbonise the energy system.

They’ve been working to make their data machine readable. They’ll then publish it on their data hub, through the Energy Data Exchange.

They’re taking in information from the Met Office and matching it up with price changes over time, to see the impact that weather has on energy prices.
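
That kind of matching-up might look like the sketch below: joining weather observations to prices by date. The column names and figures are invented for illustration.

```python
# Joining weather observations to energy prices by date, to explore
# the impact of weather on price. Column names and figures are invented.
import pandas as pd

weather = pd.DataFrame({
    "date": pd.to_datetime(["2019-01-01", "2019-01-02"]),
    "mean_temp_c": [4.2, 1.1],
})
prices = pd.DataFrame({
    "date": pd.to_datetime(["2019-01-01", "2019-01-02"]),
    "price_gbp_mwh": [55.0, 61.3],
})

merged = weather.merge(prices, on="date")
print(merged[["mean_temp_c", "price_gbp_mwh"]].corr())  # a rough first look
```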

17-18 Jan 2020 – Ofgem and Valtech will co-host a hackathon on visualising environmental data. It’ll ask questions like “How could we decarbonise the UK in 5 years?”

Jo Judge, National Biodiversity Network

They get data in lots of different formats. Converting this into something consistent and usable is a challenge. Encouraging people to use this biodiversity data also takes work. Their State of Nature report visualises and summarises some of this data.

Philip Taylor, Open Seas

Mapping cod volumes and fishing locations over time, using publicly-available data, provokes conversations about the management of this resource. (Of course I disagree with this conception of these creatures as a resource.)

Open Seas tries to take data and turn it into public awareness and better decision-making. They also use data to spot illegal fishing: combining boat beacon data with geospatial data on protected areas reveals boats that have fished illegally.

Chris Jarvis, Environment Agency

The Environment Agency use data to create UK Climate Projections, looking at the impact that change will have on weather. They’re working on linked data to allow their datasets to be built up in useful ways.

We used to think about flood defence. That’s not viable any more – we now think about resilience. The Environment Agency want to build a “nation of climate change champions” – people who know what’s happening, the risks and impact on them and what they can do.

2/3 of the 5 million people whose homes are at risk of flooding are unaware.

The Environment Agency are great at flood forecasting. Data is collected as often as every 15 minutes. They collect this over time, and make it available.

This data is available through an API. There’s information on hydrology and flood monitoring, including flood areas.
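
For example, a query for current flood warnings might look like the sketch below. The endpoint path is my recollection of the Environment Agency’s published real-time flood-monitoring API, so treat it as an assumption and check the documentation.

```python
# Querying current flood warnings from the Environment Agency's
# real-time flood-monitoring API. The path is an assumption - check the docs.
import requests

response = requests.get(
    "https://environment.data.gov.uk/flood-monitoring/id/floods",
    timeout=10,
)
response.raise_for_status()
for warning in response.json().get("items", []):
    print(warning.get("severity"), "-", warning.get("description"))
```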

Aside: The Met Office’s data might also be interesting and useful. Information on their products and hourly site-specific observations.

Ben Rewis, Save the Waves

Dirty Wave challenge – crowdsourced data on dirty beaches, with an incentive to take action.

Users take a photo, tag it to their geolocation, and classify the type of problem that it relates to.

Advice for the January hackathon: convene a set of people with shared values. Use technology to add value in some way. Get standards to encourage reuse and interoperability. Connect shared communities to a bigger picture. You might either get people to passively add data, or to interrogate, curate and work with what is already there.

Build a food bank API – part 1

I’m going to try and build an API that tells you the items needed by nearby foodbanks.

An API is a tool that lets you quickly interface between bits of technology. If a tool has an API, it means that web developers can quickly interact with it: either taking information out of it or sending information or instructions to it. Using an API to interact with a bit of software is generally quick and easy, meaning that you can spend your time and energy working on doing something special and interesting with the interaction, rather than spending your effort working out how to get the things to talk to each other in the first place. Twitter has an API which lets you search, view or post tweets; Google Maps has an API that lets you build maps into your website or software. I built a tool around the Twitter API a few years ago and found it a real thrill.

The idea for this API came from Steve Messer. I haven’t worked on a creative/web development project for about a year, and I’ve been feeling eager to take one on. I know that I learn a lot working on a personal project. I also experience a fantastic sense of flow.

Inspired by the Weeknotes movement, I’m going to write a series of blog posts about how I get on.

Goal for the project

Make an API that, for a given geolocation, returns the nearest 3 foodbanks, with a list of the items that they need.

How I’m approaching the work

I’m going to focus on the Trussell Trust, as they have a large national network of foodbanks – whose websites seem to work in the same way.

I’m starting by testing some risky assumptions. If these assumptions turn out to be wrong, I might not be able to meet my goal. So I want to test them as soon as I can.

Currently known risky assumptions

  • If I know the URL of a given foodbank’s page on food donations, I can work out what items they need.
  • All Trussell Trust foodbanks follow the same way of organising their websites
  • All Trussell Trust foodbanks follow the same way of describing the items they need.
  • I can access or somehow generate a comprehensive and accurate list of all Trussell Trust foodbanks
  • If I have a list of Trussell Trust foodbanks I can straightforwardly work out the URLs of their pages describing the items they need
  • I can scrape the information I need from the relevant server/servers in a courteous way
  • It won’t be very difficult to build a data representation of food banks and required items, or to store this in an appropriate database.
  • Building and running the API won’t be too much fuss. (Or, less concisely: It’s possible to build a lightweight, modern infrastructure to host a database for this API and serve requests without too much complexity or cost.)

Side challenge

Can I host this API in a way that is carbon neutral or, even better, renewably-hosted?

If I can’t, can I at least work out how much it’s polluting and offset it somehow?

What next

I’m going to start by working on the first risky assumption – “If I know the URL of a given foodbank’s page on food donations, I can work out what items they need.”

Read part 2 of this project to find out what I did next.

Audio Experience Design

Dr Lorenzo Picinali, Senior Lecturer in Audio Experience Design at Imperial College London, visited GOV.UK to talk about his work. He works on acoustic virtual and augmented reality. He’s recently worked on 3D binaural sound rendering, spatial hearing, interactive applications for visually impaired people, hearing aid technologies, and audio and haptic interaction.

Vision contains much more information than sound. If there’s audio and visual input, our brains generally prioritise the visual.

e.g. the McGurk illusion: visual input shapes our understanding of sound.

https://www.youtube.com/watch?v=G-lN8vWm3m0

Echo location: one blind man gets information on size, layout, texture and density by making a clicking noise and listening to the echoes. He trained his brain to better localise echoes.

(To learn more, check out this episode of the Invisibilia podcast)

In some contexts sound is better than vision:

  • It’s 360 degrees. You don’t have to be looking at it.
  • It’s always active. e.g. good for alarms.
  • Occlusions don’t make objects inaudible. (You can often hear things even if there’s another object in the way, whereas line of sight is generally blocked by other objects.)
  • Our brain is really good at comparing sound signals
  • We’re better at memorising tonal sequences than visual sequences.

Examples of good interfaces that use sound:

  • Sound can be useful to give people information in busy situations. e.g. a beeping noise to help you reverse park.
  • Music to help pilots fly level at night. With this interface, the left or right volume would change if the plane was tilting, and the pitch would go up or down if the plane was pointing up or down. This worked really well.
  • A drill for use in space. Artificial sound communicated speed and torque.

Acoustic augmented reality is a frontier that hasn’t been fully explored yet. With sound, we can match the real world and the virtual world more convincingly than with the visual elements of augmented reality, where it’s quite clear that they aren’t real.

Our ears are good at making sense of differences in volume, and in the time at which sound reaches each of them. This lets us work out where in space sounds are coming from. Our binaural audio processing skills mean that we can create artificial 3D soundscapes.
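
As a rough illustration of the timing cue: the Woodworth spherical-head approximation estimates the interaural time difference for a sound source at a given azimuth. This is a textbook formula rather than anything from the talk, and the head radius is a typical assumed value.

```python
# Interaural time difference (ITD) via the Woodworth spherical-head
# approximation: ITD ~= (a / c) * (sin(theta) + theta) for azimuth theta.
from math import radians, sin

HEAD_RADIUS_M = 0.0875   # 'a': a typical textbook head radius, in metres
SPEED_OF_SOUND = 343.0   # 'c': speed of sound in air, metres per second

def itd_seconds(azimuth_deg):
    """Estimated difference in arrival time between the two ears."""
    theta = radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (sin(theta) + theta)

for azimuth in (0, 30, 60, 90):
    print(f"{azimuth:>2} degrees: {itd_seconds(azimuth) * 1e6:.0f} microseconds")
```

A source directly ahead (0 degrees) gives no time difference; a source at 90 degrees gives roughly 650 microseconds, about the largest difference our heads produce.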

https://imperialcollegelondon.app.box.com/s/3ki1gg770nmzhykhvnqzzdd7ipx6ioem

Plugsonic – a platform that lets you create 3D soundscapes on the web using your own sound files and pictures.

Open standards – a cross-government technical architecture workshop

My notes from a cross-government Technical Architecture community workshop on 29 July, hosted at Government Digital Service.

Open Standards are publicly-available agreements on how technology will work to solve a particular problem, or meet a particular need.

The Open Data Institute has a useful definition of standards, open standards and open data.

Open Standards are good for laying the foundations for cooperation in technology, as they allow people to work in a consistent way. e.g. HTML is an open standard, which means that everyone can build and access web pages in the same way.

As technology develops, the standards can be updated, allowing innovation in a way that retains the benefits of interoperability.

How GDS works with Open Standards – Dr Ravinder Singh, Head of Open Standards, Government Digital Service

GDS outlines the Open Standards it supports. You can suggest standards that should exist. You’ll be asked 47 assessment questions. If a proposal comes out of that, GDS will take this to the Open Standards Board, which meets twice a year. The new Open Standard will be published on GOV.UK if it’s adopted. It’ll be incorporated into Service Assessments and the Technology Code of Practice.

PDFs are still the most frequently uploaded filetype on GOV.UK. So there’s a long way to go in making HTML and other open standards the default. (Why content should be published in HTML not PDF)

Supporting the adoption of open standards – Leigh Dodds, Open Data Institute (ODI)

CSVW lets you add metadata describing a CSV file’s structure and schema.
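
As a sketch of what that looks like: a CSVW metadata file is a JSON document that sits alongside a CSV and describes its columns. The dataset, filenames and column details below are invented, and I’m writing the JSON from Python purely for illustration.

```python
# A minimal CSVW-style metadata document describing the columns of a CSV.
# The dataset, filenames and column details are invented for illustration.
import json

metadata = {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "spend-over-25k.csv",
    "tableSchema": {
        "columns": [
            {"name": "date", "titles": "Payment date", "datatype": "date"},
            {"name": "supplier", "titles": "Supplier", "datatype": "string"},
            {"name": "amount", "titles": "Amount (GBP)", "datatype": "decimal"},
        ]
    },
}

with open("spend-over-25k.csv-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```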

Open Standards for Data – ODI microsite

“Open standards for data are reusable agreements that make it easier for people and organisations to publish, access, share and use better quality data.”

ODI have produced a canvas to help you think about researching and designing a standard. The technical bit is the easy bit – the hard bit is getting people to agree on things.

Some advice if you’re building a new open standard:

  • Don’t just dive into the technology rather than understanding the problem
  • Invest time in getting people to agree
  • Invest time in adoption. Don’t just do the specification. You need guidance, training, tools, libraries.
  • Focus on the value you’re trying to bring – not just the standard as an end in itself.
  • If you think you want a standard, be clear what type of standard you mean. Types of standard include:
    • Definitions
    • Models
    • Identifiers
    • Taxonomies
    • File formats
    • Schemas
    • Data transfer
    • Codes of practice
    • Data types
    • Units and measures
    • How we collect data

Opportunities for adopting open standards in government

Some thoughts from my group:

Schemas for consistent transparency publishing on data.gov.uk. Currently lots of datasets are published in a way that doesn’t allow you to compare between them. e.g. if you are comparing ‘spend above £25k’ data between councils, at the moment this isn’t interoperable because it’s structured in different ways. If all this data was published according to a consistent structure, it would be much easier to compare.
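
To make that concrete, here’s a sketch of the kind of manual column-mapping every data consumer currently has to write – exactly the work a shared schema would make unnecessary. The column names and figures are invented.

```python
# Two councils publishing 'spend over £25k' data with different structures.
# Without a shared schema, every consumer has to write mappings like these.
# Column names and figures are invented for illustration.
import pandas as pd

council_a = pd.DataFrame(
    {"Payment Date": ["2019-01-05"], "Supplier Name": ["Acme Ltd"], "Amt": [31000]}
)
council_b = pd.DataFrame(
    {"date_paid": ["2019-01-07"], "vendor": ["Bravo plc"], "value_gbp": [27500]}
)

SHARED_COLUMNS = ["date", "supplier", "amount_gbp"]

normalised = pd.concat([
    council_a.rename(columns={"Payment Date": "date", "Supplier Name": "supplier", "Amt": "amount_gbp"}),
    council_b.rename(columns={"date_paid": "date", "vendor": "supplier", "value_gbp": "amount_gbp"}),
], ignore_index=True)[SHARED_COLUMNS]

print(normalised)
```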

Shared standard for technical architecture documentation. This would make it easier for people to understand new things.

Do voice assistants have an associated standard? Rather than publishing different (meta-)data for each service – e.g. having a specific API for Alexa – it would be better for all of these assistants to consume content/data in a consistent way.

The (draft) future strategy for GOV.UK involves getting a better understanding of how services are performing across the whole journey, not just the part that is on GOV.UK. Could standards help here?

Kate Manne: Down Girl – Summary

Patriarchy is supported by misogyny and sexism

Misogyny is a system of hostile forces that polices and enforces patriarchal order.

Sexism: “the branch of patriarchal ideology that justifies and rationalises a patriarchal social order”
Belief in men’s superiority and dominance.

Misogyny: “the system that polices and enforces [patriarchy’s] governing norms and expectations”
Anxiety and desire to maintain patriarchal order, and commitment to restoring it when disrupted.

A reduction in sexism in a culture might lead to an increase in misogyny, as “women’s capabilities become more salient and hence demoralizing or threatening”

Women are expected to fulfil asymmetrical moral support roles

Women are supposed to provide these to men:

  • attention
  • affection
  • admiration
  • sympathy
  • sex
  • children
  • social, domestic, reproductive and emotional labour
  • mixed goods, like safe haven, nurture, security, soothing and comfort

Goods that are seen as men’s prerogative:

  • power
  • prestige
  • public recognition
  • rank
  • reputation
  • honor
  • ‘face’
  • respect
  • money and other forms of wealth
  • hierarchical status
  • upward mobility
  • the status conferred by having a high-ranking woman’s loyalty, love, devotion etc

If women try to take masculine-coded goods, they can be treated with suspicion and hostility.

There are lots of “social scripts, moral permissions, and material deprivations that work to extract feminine-coded goods from her” – such as:

  • anti-choice movement
  • cat-calling
  • rape culture

There are lots of mechanisms to stop women from taking masculine-coded statuses – such as:

  • testimonial injustice
  • mansplaining
  • victim-blaming

An example of this asymmetric moral economy:

“Imagine a person in a restaurant who expects not only to be treated deferentially – the customer always being right – but also to be served the food he ordered attentively, and with a smile. He expects to be made to feel cared for and special, as well as to have his meal brought to him (a somewhat vulnerable position, as well as a powerful one, for him to be in). Imagine now that this customer comes to be disappointed – his server is not serving him, though she is waiting on other tables. Or perhaps she appears to be lounging around lazily or just doing her own thing, inexplicably ignoring him. Worse, she might appear to be expecting service from him, in a baffling role reversal. Either way, she is not behaving in the manner to which he is accustomed in such settings. It is easy to imagine this person becoming confused, then resentful. It is easy to imagine him banging his spoon on the table. It is easy to imagine him exploding in frustration.”

Praise, as well as hostility, enforces patriarchy

“We should also be concerned with the rewarding and valorizing of women who conform to gendered norms and expectations, in being (e.g.) loving mothers, attentive wives, loyal secretaries, ‘cool’ girlfriends, or good waitresses.”

Misogyny is not psychological

Misogyny isn’t a psychological phenomenon. It’s a “systematic facet of social power relations and a predictable manifestation of the ideology that governs them: patriarchy.”

Misogyny is banal (to adapt a famous phrase of Hannah Arendt’s).

This understanding of misogyny is intersectional

Misogyny is mediated through other systems of privilege and vulnerability. Manne does not assume some universal experience of misogyny.

Shout out to “The Master’s Tools Will Never Dismantle the Master’s House”, which critiques middle-class heterosexual white women over-generalising on the basis of their experience.

A quick note on privilege

Privileged people “tend to be subject to fewer social, moral, and legal constraints on their actions than their less privileged counterparts”


What’s new in the new Service Standard

The Government Digital Service recently launched a new version of the Service Standard. What’s changed?

  • It’s now called the Service Standard, not the Digital Service Standard. This reflects the desire to create end-to-end services. This is better than creating digital services, and then (if you’re lucky) considering assisted digital as an afterthought. People are encouraged to provide a joined up experience across channels. What’s the user experience like if a user phones or emails you?
  • Removed the requirement to tell everyone to use the digital service. Because digital isn’t always the right channel. And there’s already a financial imperative encouraging service owners to shift people to digital, so we didn’t need to push that any more. Instead, we need to encourage people to think more broadly about the service, not just the digital part.
  • Focus on solving a whole problem for users, not just a part of it. The Standard encourages people to ask if the service is part of a wider journey. e.g. business tax registration is probably part of a broader journey of starting a business. So you should join up with those services too.
  • The team have added more information on why the Service Standard expects certain things, and the benefits of following the Standard. So it’s less doctrinaire and encourages people to do the right thing.
  • People are challenged to go beyond just thinking about accessibility, and to think about inclusion more generally: e.g. trans people and same-sex relationships.
  • The type of approach to meeting user needs is challenged. Is the service the right way to meet user needs? Or should you publish content or make data available via an API instead?
  • The scope of the service is questioned. If it’s too broad or too narrow it’s a problem.
  • Removed the requirement to test with the minister.