Don’t blame mutant algorithms
Governments need to get better at navigating complex decisions, not shirk responsibility when things go wrong
Hi, it's Chris from the Tony Blair Institute. In this edition we’re exploring the increasing use of algorithms in government, and what policymakers need to do to get this right.
By Kirsty Innes and Chris Yiu
Earlier this month, the UK government announced that this year’s A-level and GCSE exams in England will be cancelled and replaced with an alternative assessment mechanism, after causing a fiasco last year by trying to use an algorithm to generate exam grades.
In fairness to the government, there were no good options available last year — nothing could have satisfactorily replaced exams. But trying to synthesise academic outcomes using data about past years’ students was particularly wrong-headed and futile. Students need a chance to prove their individual merit — denying them that just replicates and entrenches the inequalities that beset previous years’ cohorts.
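To make that point concrete, here is a deliberately simplified, hypothetical sketch of cohort-based standardisation (the function and data below are invented for illustration, not the model actually used in 2020). If grades are allocated by mapping teacher rankings onto each school's historical grade distribution, the grade a student can receive is capped by what previous cohorts at their school achieved:

```python
# Toy illustration of cohort-based grade standardisation.
# Hypothetical and deliberately simplified -- not the 2020 model itself.

def standardise(students, historical_distribution):
    """Assign grades by teacher rank to match a school's past results.

    students: list of (name, rank) pairs, rank 1 = strongest.
    historical_distribution: list of (grade, share) pairs summing to 1.0.
    """
    ranked = sorted(students, key=lambda s: s[1])
    n = len(ranked)
    results, cursor = {}, 0
    for grade, share in historical_distribution:
        quota = round(share * n)
        for name, _ in ranked[cursor:cursor + quota]:
            results[name] = grade
        cursor += quota
    for name, _ in ranked[cursor:]:      # rounding leftovers: lowest grade
        results[name] = historical_distribution[-1][0]
    return results

cohort = [("Asha", 1), ("Ben", 2), ("Cal", 3), ("Dee", 4), ("Eli", 5),
          ("Fay", 6), ("Gus", 7), ("Hal", 8), ("Ivy", 9), ("Jo", 10)]

# At a school that historically awarded 20% A grades, the top student gets an A...
print(standardise(cohort, [("A", 0.2), ("B", 0.3), ("C", 0.5)])["Asha"])  # A
# ...but an identical student at a school with no history of A grades cannot
# receive one, whatever her individual ability.
print(standardise(cohort, [("B", 0.5), ("C", 0.5)])["Asha"])  # B
```

The same student, with the same teacher ranking, gets a different grade depending purely on her school's past results. That is the sense in which this approach replicates historical inequality rather than measuring individual merit.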
The A-levels mess showed how painfully wrong things can go when governments subject citizens to an automated decision-making process in situations where those citizens should have had agency, if not control. The upshot of this and a handful of other high-profile controversies (like the home-building formula) has been that algorithms have become a bogeyman, a byword for callous, impersonal systems that prioritise cost-cutting and efficiency over people.
Press coverage regularly refers to “mutant” algorithms or questions whether algorithms can be “trusted”. This anthropomorphism makes for catchy copy, but it obscures the (admittedly complicated) reality of how governments use algorithms. At worst, it can make policymakers and politicians shy away from technology that could be highly beneficial.
No process is perfect
Algorithms form the basis of computing processes and programmes with such a wide range of uses that it makes little sense to think of them as a single class. It would be a shame if a generally low-quality debate around the issue made governments less willing to use algorithms, or less open about decision-making processes — automated or otherwise.
As the foundations of AI and machine learning, algorithms have huge potential to help governments design and deliver better public services. Many potential applications, like monitoring the performance of public services, or helping to understand trends or model the impact of policies, are fairly uncontroversial. And there are obvious advantages: algorithms can help personalise services so they better meet citizens’ needs, or process huge volumes of data quickly, freeing up public servants’ time for more difficult or sensitive tasks.
Algorithms can also help to take decisions, both about policy in general, and about individuals. On the face of it this seems like a controversial statement, partly because of the unflattering “computer says no” stereotype that has become so familiar. But the real issue is not whether an algorithm is involved, but the quality of decision-making more generally.
Every day, officials and public servants take thousands of decisions that materially impact people’s lives — think about border force agents, job centre staff or the receptionist at your local doctor’s surgery, to name a few. The decisions they take can be guided by personal judgement and experience, formal or informal guidelines, or strict criteria, systems and processes.
No decision-making process is perfect: setting objective criteria can mean that the process doesn’t take into account special circumstances. Human judgement inevitably brings in an element of inconsistency and bias. What’s important is that the processes that governments and public bodies use to take decisions are well-designed, transparent, and challengeable — especially when it comes to decisions about specific individuals.
Look forward, not back
Four things can help make decision-making processes better. First, they should be well-designed, in consultation with experts and service users (and needless to say, the greater the impact on people’s lives, the more time and effort should be devoted to getting it right; tools like the Consequence Scanning manual from Doteveryone can be helpful for this kind of work). Second, there needs to be transparency about how the decision will be reached. Third, the process needs to be challengeable by the people affected. And fourth, the outcomes should be monitored to check what impact the process is having. This holds whether the decision is made by a person, an algorithm or both.
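As a rough sketch of what those four properties might look like in practice (everything below — the criteria, thresholds and names — is hypothetical, an illustration rather than a prescription): a decision function can apply published criteria, return plain-language reasons alongside the outcome, offer a route to challenge, and log every decision so outcomes can be monitored.

```python
# Hypothetical sketch of an auditable automated decision: published
# criteria, plain-language reasons, an appeal route and an outcome log.
# The criteria and thresholds below are invented for illustration.
import datetime
from dataclasses import dataclass

CRITERIA_VERSION = "2021-02-v1"   # published, so the process is transparent

@dataclass
class Decision:
    applicant_id: str
    outcome: str
    reasons: list             # plain-language reasons, shown to the applicant
    criteria_version: str
    timestamp: str
    appealed: bool = False

decision_log = []             # reviewed later to monitor outcomes for bias

def assess(applicant_id: str, income: int, dependants: int) -> Decision:
    """Apply the published criteria and explain the result."""
    reasons, eligible = [], True
    if income > 25_000:
        eligible = False
        reasons.append("Income above the 25,000 threshold.")
    if dependants == 0:
        eligible = False
        reasons.append("No dependants recorded.")
    if eligible:
        reasons.append("All published criteria met.")
    decision = Decision(
        applicant_id=applicant_id,
        outcome="eligible" if eligible else "not eligible",
        reasons=reasons,
        criteria_version=CRITERIA_VERSION,
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    decision_log.append(decision)   # every decision kept for monitoring
    return decision

def appeal(decision: Decision) -> None:
    """Flag a decision for human review -- the challenge route."""
    decision.appealed = True

d = assess("APP-001", income=30_000, dependants=2)
print(d.outcome, d.reasons)
appeal(d)                     # the person affected can contest the outcome
```

Notably, none of this is specific to algorithms: a human caseworker’s decisions could be recorded and reviewed against the same four tests.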
In a world where public services sometimes lag years behind the private sector in quality and efficiency, and where governments have limited resources, it makes sense to use every tool available to improve policy making and public service delivery. Where departments and delivery bodies get it wrong, they should be held to account. But condemning algorithms as inherently dangerous isn’t helpful, and risks having a chilling effect on much-needed efforts to bring public services up to date.
The UK government’s approach to last year’s A-level results produced a spectacularly bad outcome. But we shouldn’t let this cast a pall over the use of this technology in government for evermore. The briefing note we published this week provides a guide for policymakers looking to do better — unpacking the ways algorithms can support more effective government, and what good looks like in a world that is getting more complex by the day.