Algorithms have the potential to wreck lives. Take, for instance, the fiasco surrounding 2020’s A-level results in England and Wales, when many thousands of students who’d been unable to sit their exams rebelled against the unfair grades they’d been assigned by a flawed algorithm.
Yet algorithm-based artificial intelligence (AI) systems are powerful tools that could radically improve the work of many public bodies. They offer new possibilities for the delivery of many services, advances in healthcare research, efficiencies in the labour market and the personalisation of online services.
According to the Ada Lovelace Institute, an independent research group that monitors the use of data and AI, algorithmic decision-making systems are being deployed at an unprecedented speed in both the business world and the public sector. They are becoming ubiquitous, embedded in everyday products and services.
But their ‘black box’ nature – the opacity with which they are designed and used – suggests an absence of human control and responsibility, and that is ringing alarm bells. This lack of transparency undermines trust both in algorithm-based decision-making and in the organisations that use such processes.
Of particular concern to the Ada Lovelace Institute is “the expansion of algorithmic decision-making systems across the public sector, from AI ‘streaming tools’ used in predictive analytics in policing to risk-scoring systems to support welfare and social care decisions”.
There is a widespread lack of understanding about where and how such technology is being used, notes its associate director of public and social policy, Imogen Parker. Public bodies need to address this matter urgently if they are to engender trust in their decision-making processes.
Although the institute knows that “data-driven tools are being used to match people to the right services, assess visa applications, predict risk and even escalate families into children’s social care, we have a paucity of information about the public sector’s use of algorithms”, she says.
The government is at least piloting a transparency register for algorithms and consulting on whether this should be made mandatory. It is also committed to publishing a white paper on AI regulation early next year.
“Greater transparency is only the first step towards full accountability,” Parker stresses. “The ultimate goal must be to create systems that work for people. We need mechanisms that involve people who are affected by these systems in developing and deploying them and in assessing their risks and impacts, as well as robust regulators who can pass judgment and enforce sanctions where needed.”
Stian Westlake, CEO of the Royal Statistical Society, agrees that organisations using complex algorithms need to engender trust among those whom their decisions will affect. They can start doing this by being clear about their intended applications for the technology.
“Trust can be harmed when opaque algorithms are used without good-quality data behind them,” he adds, stressing that local authorities and other public bodies should carefully assess the datasets they plan to use.
“Government data can be biased or simply wrong,” Westlake notes. “It is also a good idea, as the Office for Statistics Regulation has suggested, to test the acceptability of the algorithm with affected groups. Public bodies should be able to explain how conclusions are reached in individual cases, not just fall back on ‘computer says no’.”
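For illustration only (this is not Westlake's own example), a minimal Python sketch shows what such a case-level explanation could look like for a simple additive scoring model; the factor names and weights are invented.

```python
# Hypothetical sketch: explaining an individual decision from a simple
# additive scoring model, rather than answering "computer says no".
# Factor names and weights are invented for illustration.
weights = {"income": 0.8, "years_at_address": 0.3, "prior_debt": -1.2}
applicant = {"income": 1.5, "years_at_address": 0.5, "prior_debt": 2.0}

# Contribution of each factor to this applicant's overall score.
contributions = {k: weights[k] * applicant[k] for k in weights}
score = sum(contributions.values())

print(f"overall score: {score:.2f}")
for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {factor}: {value:+.2f}")
# The breakdown shows which factors pushed the decision up or down,
# giving the person affected something concrete to query or correct.
```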
Parker adds: “We’re seeing a trust deficit in how data is being used. We’ve had the A-level protests and the successful legal challenge against the police’s use of facial-recognition technology in England and Wales. And, more recently, more than 3 million people opted out of sharing their medical information as part of the government’s General Practice Data for Planning and Research framework.”
The outcry over the A-level grading debacle, and the government’s U-turn in response to it, indicates that there can be no true algorithmic accountability without a “critical audience” that can communicate effectively with policy-makers. So says Dr Daan Kolkman, research fellow at Eindhoven University of Technology and the Jheronimus Academy of Data Science in ’s-Hertogenbosch.
A formal critical audience could take the shape of an independent, publicly funded watchdog that can draw on technical expertise to investigate systems, report in the public domain and, if necessary, provide redress to parties it deems unfairly disadvantaged by poorly designed algorithms.
Kolkman stresses that algorithms are only as good as the data they are fed and are inherently biased because people are. AI systems are trained on historical data patterns, many of which incorporate social inequalities and therefore risk perpetuating them. He cites the renowned British statistician George Box’s cautionary aphorism: “All models are wrong, but some are useful.”
This is a widely acknowledged problem, which programmers have sought to address by making their algorithms more explainable. But, as Parker notes, “you can work to correct some technical biases, but there will always be biases. The key questions are whether those biases are legally acceptable and whether developers and public-sector bodies are anticipating and mitigating them where they can.”
She adds that “bias can arise where the data is of a high quality and ‘accurate’, because an algorithm encodes structural inequalities in society into the future and amplifies them. A good example of this is a hiring algorithm that gives high scores to white men seeking senior roles because it has learnt that they are historically the successful candidates for such jobs.”
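To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of how a model trained on biased historical hiring decisions reproduces that bias for equally qualified candidates. The data is synthetic and every name and number is invented for illustration.

```python
# Toy illustration: a model trained on biased historical hiring decisions
# learns and reproduces that bias at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: one protected attribute and one genuine skill score.
group = rng.integers(0, 2, n)   # 0 = historically favoured group, 1 = other
skill = rng.normal(0, 1, n)     # identically distributed across both groups

# Historical decisions: driven by skill, but with an unfair bonus for group 0.
logit = 1.5 * skill + 1.0 * (group == 0)
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased labels, using group membership as a feature
# (or any proxy for it, such as postcode or school attended).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership.
same_skill = np.array([[0.5, 0], [0.5, 1]])
scores = model.predict_proba(same_skill)[:, 1]
print(f"favoured-group score: {scores[0]:.2f}, other-group score: {scores[1]:.2f}")
# The model awards the favoured group a higher score despite equal skill,
# because that is what the historical data taught it.
```

Dropping the group column rarely solves the problem on its own, because proxy variables such as postcode or school can carry much of the same signal.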
Six steps to best practice
How can public bodies avoid pitfalls when implementing powerful AI-based systems and ensure that they treat everyone fairly? Here’s a simple checklist:
1. Be transparent about your aims.
2. Assess datasets carefully (see the sketch after this list).
3. Test the algorithm with affected groups.
4. Fully explain decisions.
5. Be accountable.
6. Cooperate with regulators and independent audits.
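As a starting point for step 2, a dataset audit might begin with something as simple as the following hypothetical sketch; the column names and records are invented, and a real audit would go much further.

```python
# Hypothetical sketch for step 2 (assess datasets carefully): before training,
# compare historical outcome rates across demographic groups to spot
# imbalances that a model would otherwise learn and reproduce.
import pandas as pd

# Column names and values are invented; substitute your own dataset.
records = pd.DataFrame({
    "sex":      ["F", "F", "M", "M", "M", "F", "M", "F"],
    "region":   ["N", "S", "N", "S", "N", "S", "N", "S"],
    "approved": [0,   1,   1,   1,   0,   0,   1,   1],
})

# Outcome rate and sample size per group: large gaps or very small groups
# are both warning signs worth discussing with affected communities (step 3).
audit = (records.groupby(["sex", "region"])["approved"]
                .agg(rate="mean", count="size")
                .reset_index())
print(audit)
```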