Kin Lane is an API evangelist, and Chief Evangelist at Postman.
You should follow a design process when creating an API.
That way you can tease out assumptions, and test value before carrying out technical development work. Iteration is quickest and cheapest when you do it before you write any code!
Start off by defining an endpoint for the API, the values that you’d want to send to it, and giving an example of what you expect it to return.
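As a concrete illustration, a first sketch for the foodbank API might look something like this. The endpoint, parameter and field names here are hypothetical, invented for the example:

```python
# A design-first sketch: write down the endpoint, the inputs and an
# example response before writing any implementation code.
# All names here are hypothetical.
endpoint = "GET /foodbanks/nearest"

example_request = {"latitude": 51.5074, "longitude": -0.1278, "count": 3}

example_response = {
    "foodbanks": [
        {
            "name": "Example Foodbank",
            "distance_km": 1.2,
            "items_needed": ["tinned fish", "UHT milk", "nappies"],
        }
    ]
}
```

A sketch like this is enough to publish a mock endpoint and start gathering feedback before any real code exists.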
You can use tools like Postman to publish this test API, giving you an endpoint for testing and letting people try working with it.
With OpenAPI, Swagger and Postman, you can publish your documentation from the code.
Some things your API should have:
A choice of response format. Don’t just give JSON – let people receive CSVs if they want them. This makes things more open to non-developers.
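Using only the Python standard library, a response-format switch might be sketched like this. The `render` function and its field layout are my invention, not part of any particular framework:

```python
import csv
import io
import json

def render(rows, fmt="json"):
    """Serialise a list of dicts as JSON (the default) or CSV,
    so non-developers can open the same data in a spreadsheet."""
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    return json.dumps(rows)
```

In a real API the `fmt` value would come from a query parameter or an Accept header.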
A management layer: access keys, rate limits. Apigee, Tyk.io, MuleSoft and Kong are tools for this.
Automated testing in your Continuous Integration / Continuous Deployment pipeline
A clear point of contact for support
A plan for communications. Announce your API and new versions. Explain the purpose and what’s changing. You should have a comms strategy around every release. Without evangelists and communications, your API won’t last.
Create a clear, professional homepage (rather than just a line of text!), so that new users have an idea of the purpose of the API, and so that it looks credible.
Produce documentation on how to use the API, so that developers understand how to interact with it.
Create a web page that uses the API, taking a user’s location and showing them the nearest foodbanks and what they need. Once the API is built, this feels like a natural next step. Anyone who goes to this page will be able to find out which food banks are near them, and what items they need.
Tell people about it, so that developers can start using the API, and people can start using the service to find out what items their local foodbanks need them to donate. We’ll have two minimum viable products – one API, and one human-facing service – and it’ll be time to find out if there’s any interest in using them.
I’ll be collaborating to make the above happen, which is exciting! We’ll be doing some user testing as well, to see how people use the API and documentation.
My goal was “Make an API that, for a given geolocation, returns the nearest 3 foodbanks, with a list of the items that they need.”
I’ve achieved this for something running locally (i.e. just on my computer, but not on a web server that anyone could access). You can download the code and follow the instructions to run it yourself, if you have the Python programming language installed on your computer. I actually went slightly further than planned – you can specify the number of foodbanks you want to see, and you can also find out the items needed by a given named foodbank.
The next step is to get it running online so that anyone can use it.
If I have a list of Trussell Trust foodbanks I can straightforwardly work out the URLs of their pages describing the items they need. Mostly, yes. I’ve written code to do this.
I can scrape the information I need from the relevant server/servers in a courteous way. Not sure yet. I assume all of the Trussell Trust’s standard sites are hosted on a single web server. I make a single GET request to get the names and URLs of all the foodbanks, but each ‘items needed’ page is a separate request. I’ve included a pause between each request, but I don’t know if it’s too long or too short.
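The courteous-scraping part can be reduced to a small helper like this. It's only a sketch: the two-second delay is a guess, and `fetch` stands in for whatever actually performs the HTTP GET (e.g. a wrapper around urllib):

```python
import time

def fetch_politely(urls, fetch, delay_seconds=2.0):
    """Fetch each URL in turn, pausing between requests so the
    shared server isn't hammered. `fetch` is any callable that
    takes a URL and returns its content."""
    results = {}
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay_seconds)  # pause between requests
        results[url] = fetch(url)
    return results
```

Checking the site's robots.txt and setting a descriptive User-Agent are also part of courteous scraping.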
It won’t be very difficult to build a data representation of food banks and required items, or to store this in an appropriate database. This was quite straightforward. And I didn’t even need a database, as I’m going to hold all the information in memory and not manipulate it.
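An in-memory representation, plus the nearest-N lookup from the original goal, could be as simple as this sketch. The class and function names are mine, and a haversine great-circle distance stands in for whatever distance measure the real code uses:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from math import asin, cos, radians, sin, sqrt


@dataclass
class Foodbank:
    name: str
    latitude: float
    longitude: float
    items_needed: list[str] = field(default_factory=list)


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def nearest(foodbanks, lat, lon, n=3):
    """Return the n foodbanks closest to the given location."""
    return sorted(
        foodbanks, key=lambda fb: distance_km(lat, lon, fb.latitude, fb.longitude)
    )[:n]
```

Holding everything in a plain list like this works because the dataset is small and read-only; a database only becomes worthwhile once the data needs updating or querying in more complex ways.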
Building and running the API won’t be too much fuss. (Or, less concisely: It’s possible to build a lightweight, modern infrastructure to host a database for this API and serve requests without too much complexity or cost.) I’ve built an API that runs locally. Hosting it online as a real webserver should be reasonably straightforward. That’s the next step. I’ve found an entirely-renewably-powered web host, which might help me meet my extra goal of running this API entirely renewably.
A summary of the sessions I attended at the Open Data Institute’s Summit on 12 November 2019
Tim Berners-Lee and Nigel Shadbolt, interviewed by Zoe Kleinman
Tim Berners-Lee described commercial advertising as “win-win”, because targeted advertising is more relevant. But “political advertising is very different… people are being manipulated into voting for things which are not in their best interests.”
Nigel Shadbolt: There’s a risk that people just move on to new shiny things. Creating a common data infrastructure is unfinished business.
Berners-Lee: We should be able to choose where our data is shared, rather than it just being impossible because systems can’t speak to each other. “You can share things with people that you want to share it with to get your life done.”
Shadbolt: Data sharing has to be consensual. Public data shouldn’t be privatised. We need transparency and accountability of algorithms used to make decisions on the basis of data. Platform providers are controlling and tuning the algorithms.
Berners-Lee: How might we train algorithms to feed us news that optimises for ‘aha’ connection moments, rather than feelings of revulsion?
Kriti Sharma – Can AI create a fairer world?
If you’re building tools with data, the biases of that data are perpetuated and potentially amplified, which can worsen existing inequalities. e.g. access to credit or benefits, or deciding who gets job interviews.
Early on in a design process, think about how things could go wrong.
Train machine learning or AI on more diverse datasets.
An MIT test of facial recognition found an error rate of 1% with white-skinned men. For darker skinned women, the error rate was 35%.
Build diverse teams. Only 12% of the workforce on AI and machine learning are women. A more diverse team is more likely to question and correct biases.
Data Pitch accelerator
An EU-funded accelerator, connecting public and private sectors to create new data-driven products and services. A 3-year project.
28 data challenges, 13 countries.
4.6 million euros invested; 14.8 million euros of “value unlocked” – additional sales, investment and efficiencies. These are actual numbers, not optimistic forecasts.
How do we cultivate open data ecosystems?
Richard Dobson, Energy Systems Catapult
Leigh Dodds, Open Data Institute
Rachel Rank, Chief Exec, 360 Giving
Huw Davies, Ecosystem Development Director, Open Banking
Energy Systems Catapult: If you want to move to renewable energy, you need to know what’s produced, where, and when.
So BEIS, through a Catapult scheme, set up a challenge on this. Seamless data sharing was crucial.
360 Giving: Help grant makers open up their grant data in an open format so people can see who is funding what, why, and how much.
Open Banking: Catalysed by regulation from the Competition and Markets Authority. The UK required the largest banks to fund an implementation entity, to make sure the work to set up a thriving ecosystem was effective and standards-driven. So they worked on standards for consent and security. Every 2 months the ecosystem doubles in size.
When encouraging people to contribute to an ecosystem, show value, don’t tell people about it. Don’t talk to people about organisational identifiers. Show them that you can’t see their grants alongside the other grants because they haven’t been collecting these identifiers. People had such low insight into what other people were funding that this was very compelling. Make people feel left out if they aren’t sharing their data.
Thoughts on making a healthy ecosystem:
You need standards for an ecosystem to scale
Accept that even with common standards and APIs you’ll get a few different technical service providers emerge, then people emerge who add value on top of this. (This was the experience in Open Banking)
“You can’t over-emphasise the importance of good facilitation at the heart of the ecosystem” (I took this as: you need investment from somewhere to make this collaboration happen) Open Banking did lots of work to collaboratively set up standards that everyone bought into. And they did lots of work facilitating and matchmaking to get people working together, to understand each other and provide more value.
Need to move away from just thinking about publishers and consumers. Think about the ecosystem more widely.
“When great stuff happens, shine a light on it and celebrate it”
Don’t pre-empt your users. They’ll surprise you.
Work out a way to police/protect data quality without having a single point of failure
Don’t aim for perfection, aim for progress. Start with what you’ve got. Perfect data doesn’t exist.
Caroline Criado Perez – Invisible Women: exposing data bias in a world designed for men
[This was the best session of the day by far. Excellent insight and communication.]
Most data, and the decisions based on it, have been predicated on the male experience.
Le Corbusier defined the generic human as a 6ft British police detective, as the archetype to design buildings for. He rejected the female body as too unharmonious.
Voice recognition software is 70% more accurate for men. 70% of the sample databases are male.
Car crash test dummies for decades were only male. The female ones used now are just scaled down male ones. 2015 EU regulations only said that female crash dummies should be used in 1/5 tests, and only in the passenger seat. Women are 47% more likely to be injured in a car crash and 17% more likely to die.
Medical diagrams generally centre the male body, and then have the female body as little extracts on the side. Female body seen as a deviant from the (male) standard.
Yes, the menstrual cycle is a complicating factor. So you need to study it! Heart medication and antidepressants are affected by it.
How many treatments might we have ruled out because they didn’t work on men, but might work on women but we never researched them because they didn’t work on the default male body?
Young women are almost twice as likely as men to die of heart problems in hospital.
Machine learning amplifies our biases. A 2017 study on image labelling algorithms found that pictures involving cooking were 33% more likely to be categorised as women.
When thinking about different types of transport use, the way that you classify different types of travel is important. If you don’t bundle ‘care’ together as a category, you can undersell its importance relative to employment-related travel. In general, we undervalue women’s unpaid care work. You should collect sex-disaggregated data. Be careful not to do this by proxy.
Women tend to assess their intelligence accurately. Men of average intelligence think they’re more intelligent than 2/3 of people.
Equality doesn’t mean treating women like men. Men are not the standard that women fail to live up to. Don’t fall into this when you try to fix inequality.
Diversity is the best fix for this sort of thing.
Intersectionality is even more of a problem, but wasn’t the focus of this session.
John Sheridan, Digital Director at the National Archives
Context in which data was created is important.
Good quality URLs essential to data infrastructure
Good quality processes for changing. Understanding user needs better and improving the data.
Manit Chander on information sharing in the maritime industry
In maritime industry, information sharing has been fragmented, and data classification not standardised.
HiLo gets internal near-miss data, does predictive risk modelling, and produces risk analysis and good practice.
They get messy data shared with them and then tidy it up at their end.
They produce simple, easy-to-apply, non-judgmental insights.
They focus on building trust as the most important thing to sustain the community. The people providing the data are the key group here.
People will share their information if they can see value to them.
Notes from an event at the Royal Geographical Society, 9 October 2019. Using data to build public and decision-maker awareness of climate change. (My sense was actually that the event showed that stories are more powerful than data in getting people to care about this kind of thing)
Sophie Adams, Ofgem
Ofgem is working to decarbonise the energy system.
They’ve been working to make their data machine readable. They’ll then publish it on their data hub, through the Energy Data Exchange.
They’re taking in information from the Met Office and matching it up with price changes over time, to see the impact that weather has on energy prices.
17-18 Jan 2020 – Ofgem and Valtech will co-host a hackathon on visualising environmental data. It’ll ask questions like “How could we decarbonise the UK in 5 years?”
Jo Judge, National Biodiversity Network
They get data in lots of different formats. Converting this into something consistent and usable is a challenge. Encouraging people to use this biodiversity data also takes work. Their State of Nature report visualises and summarises some of this data.
Philip Taylor, Open Seas
Mapping cod volumes and fishing locations over time, using publicly-available data, provokes conversations about the management of this resource. (Of course I disagree with this conception of these creatures as a resource.)
Open Seas tries to take data and turn it into public awareness and better decision-making. They also use data to spot illegal fishing.
Chris Jarvis, Environment Agency
The Environment Agency use data to create UK Climate Projections, looking at the impact that change will have on weather. They’re working on linked data to allow their datasets to be built up in useful ways.
We used to think about flood defence. That’s not viable any more – we now think about resilience. The Environment Agency want to build a “nation of climate change champions” – people who know what’s happening, the risks and impact on them and what they can do.
2/3 of the 5 million people whose homes are at risk of flooding are unaware.
The Environment Agency are great at flood forecasting. Data is collected as often as every 15 minutes, and built up over time.
Users take a photo, tag it to their geolocation, and classify the type of problem that it relates to.
Advice for January hackathon: convene a set of people with shared values. Use technology to add value in some way. Get standards to encourage reuse and interoperability. Connect shared communities to a bigger picture. You might either get people to passively add data, or to interrogate, curate and work with what is already there.
I’m going to try and build an API that tells you the items needed by nearby foodbanks.
An API is a tool that lets you quickly interface between bits of technology. If a tool has an API, it means that web developers can quickly interact with it: either taking information out of it or sending information or instructions to it. Using an API to interact with a bit of software is generally quick and easy, meaning that you can spend your time and energy doing something special and interesting with the interaction, rather than working out how to get the things to talk to each other in the first place. Twitter has an API which lets you search, view or post tweets; Google Maps has an API that lets you build maps into your website or software. I built a tool around the Twitter API a few years ago and found it a real thrill.
All Trussell Trust foodbanks follow the same way of organising their websites
All Trussell Trust foodbanks follow the same way of describing the items they need.
I can access or somehow generate a comprehensive and accurate list of all Trussell Trust foodbanks
If I have a list of Trussell Trust foodbanks I can straightforwardly work out the URLs of their pages describing the items they need
I can scrape the information I need from the relevant server/servers in a courteous way
It won’t be very difficult to build a data representation of food banks and required items, or to store this in an appropriate database.
Building and running the API won’t be too much fuss. (Or, less concisely: It’s possible to build a lightweight, modern infrastructure to host a database for this API and serve requests without too much complexity or cost.)
Can I host this API in a way that is carbon neutral or, even better, renewably-hosted?
If I can’t, can I at least work out how much it’s polluting and offset it somehow?
Dr Lorenzo Picinali, Senior Lecturer in Audio Experience Design at Imperial College London, visited GOV.UK to talk about his work. He works on acoustic virtual and augmented reality. He’s recently worked on 3D binaural sound rendering, spatial hearing, interactive applications for visually impaired people, hearing aids technologies, audio and haptic interaction.
Vision contains much more information than sound. If there’s audio and visual input, our brains generally prioritise the visual.
e.g. the McGurk illusion: visual input shapes our understanding of sound.
It’s 360 degrees. You don’t have to be looking at it.
It’s always active. e.g. good for alarms.
Occlusions don’t make objects inaudible. (You can often hear things even if there’s another object in the way, whereas line of sight is generally blocked by other objects.)
Our brain is really good at comparing sound signals
We’re better at memorising tonal sequences than visual sequences.
Examples of good interfaces that use sound:
Sound can be useful to give people information in busy situations. e.g. a beeping noise to help you reverse park.
Music to help pilots fly level at night. With this interface, the left or right volume would change if the plane was tilting, and the pitch would go up or down if the plane was pointing up or down. This worked really well.
A drill for use in space. Artificial sound communicated speed and torque.
Acoustic augmented reality is a frontier that hasn’t been explored yet. We can match the real world and the virtual world more convincingly than with visual elements of augmented reality, where it’s quite clear that they aren’t real.
Our ears are good at making sense of differences in volume and the time that sound reaches them. This lets us work out where in space sounds are coming from. Our binaural audio processing skills mean that we can create artificial 3d soundscapes.
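To put a number on the time-difference cue: Woodworth's classic spherical-head model estimates the interaural time difference (ITD) from a source's azimuth. This is a textbook approximation I'm adding for illustration, not anything from the talk, and the head radius below is a commonly assumed average value:

```python
from math import pi, sin

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature
HEAD_RADIUS = 0.0875    # m, a commonly assumed average head radius

def itd_seconds(azimuth_radians):
    """Woodworth's spherical-head estimate of the interaural time
    difference for a distant source at the given azimuth
    (0 = straight ahead, pi/2 = directly to one side)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_radians + sin(azimuth_radians))
```

A source straight ahead gives an ITD of zero; directly to one side it peaks at around 0.65 ms, which is the scale of difference our binaural processing resolves when placing sounds in space.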
Open Standards are good for laying the foundations for cooperation in technology, as they allow people to work in a consistent way. e.g. HTML is an open standard, which means that everyone can build and access web pages in the same way.
As technology develops, the standards can be updated, allowing innovation in a way that retains the benefits of interoperability.
How GDS works with Open Standards – Dr Ravinder Singh, Head of Open Standards, Government Digital Service
Some advice if you’re building a new open standard:
Don’t just dive in to the technology rather than understanding the problem
Invest time in getting people to agree
Invest time in adoption. Don’t just do the specification. You need guidance, training, tools, libraries.
Focus on the value you’re trying to bring – not just the standard as an end in itself.
If you think you want a standard, be clear what type of standard you mean. Types of standard include:
codes of practice
units and measures
how we collect data
Opportunities for adopting open standards in government
Some thoughts from my group:
Schemas for consistent transparency publishing on data.gov.uk. Currently lots of datasets are published in a way that doesn’t allow you to compare between them. e.g. if you are comparing ‘spend above £25k’ data between councils, at the moment this isn’t interoperable because it’s structured in different ways. If all this data was published according to a consistent structure, it would be much easier to compare.
Shared standard for technical architecture documentation. This would make it easier for people to understand new things.
Do voice assistants have an associated standard? Rather than publishing different (meta-)data for each service – e.g. having a specific API for Alexa – it would be better for all of these assistants to consume content/data in a consistent way.
The (draft) future strategy for GOV.UK involves getting a better understanding of how services are performing across the whole journey, not just the part that is on GOV.UK. Could standards help here?
Misogyny is a system of hostile forces that polices and enforces patriarchal order.
Sexism: “the branch of patriarchal ideology that justifies and rationalises a patriarchal social order” Belief in men’s superiority and dominance.
Misogyny: “the system that polices and enforces [patriarchy’s] governing norms and expectations” Anxiety and desire to maintain patriarchal order, and commitment to restoring it when disrupted.
A reduction in sexism in a culture might lead to an increase in misogyny, as “women’s capabilities become more salient and hence demoralizing or threatening”
Women are expected to fulfil asymmetrical moral support roles
Women are supposed to provide these to men:
social, domestic, reproductive and emotional labour
mixed goods, like safe haven, nurture, security, soothing and comfort
Goods that are seen as men’s prerogative:
money and other forms of wealth
the status conferred by having a high-ranking woman’s loyalty, love, devotion etc
If women try to take masculine-coded goods, they can be treated with suspicion and hostility.
There are lots of “social scripts, moral permissions, and material deprivations that work to extract feminine-coded goods from her” – such as:
There are lots of mechanisms to stop women from taking masculine-coded statuses – such as:
An example of this asymmetric moral economy:
“Imagine a person in a restaurant who expects not only to be treated deferentially – the customer always being right – but also to be served the food he ordered attentively, and with a smile. He expects to be made to feel cared for and special, as well as to have his meal brought to him (a somewhat vulnerable position, as well as a powerful one, for him to be in). Imagine now that this customer comes to be disappointed – his server is not serving him, though she is waiting on other tables. Or perhaps she appears to be lounging around lazily or just doing her own thing, inexplicably ignoring him. Worse, she might appear to be expecting service from him, in a baffling role reversal. Either way, she is not behaving in the manner to which he is accustomed in such settings. It is easy to imagine this person becoming confused, then resentful. It is easy to imagine him banging his spoon on the table. It is easy to imagine him exploding in frustration.”
Praise, as well as hostility, enforces patriarchy
“We should also be concerned with the rewarding and valorizing of women who conform to gendered norms and expectations, in being (e.g.) loving mothers, attentive wives, loyal secretaries, ‘cool’ girlfriends, or good waitresses.”
Misogyny is not psychological
Misogyny isn’t a psychological phenomenon. It’s a “systematic facet of social power relations and a predictable manifestation of the ideology that governs them: patriarchy.”
Misogyny is banal (to adapt a famous phrase of Hannah Arendt’s).
This understanding of misogyny is intersectional
Misogyny is mediated through other systems of privilege and vulnerability. Manne does not assume some universal experience of misogyny.
Shout out to “The Master’s Tools Will Never Dismantle the Master’s House” critiquing middle class heterosexual white women over-generalising on the basis of their experience.
A quick note on privilege
Privileged people “tend to be subject to fewer social, moral, and legal constraints on their actions than their less privileged counterparts”