FOR: The justice system needs to embrace machine learning
Some say allowing artificial intelligence (AI) to determine guilt or innocence in a courtroom is a step too far. But for those who are sceptical about the neutrality of human judgment, or who have witnessed an unfair justice system in action, AI and legal robots could be the answer: a genuinely fair and impartial jury.
We already automate so much else in society, so why not extend this smart automation to juries? After all, lawyers rely on technology to scan documents for keywords or evaluate collected data. And people can now use legal chatbots to determine if it’s worthwhile to pursue their case in court. There are even apps which help pair up lawyers with claims and automate legal requests.
So having AI legal robots replace jurors wouldn’t be a huge step. When we talk about AI replacing traditional jurors, we’re not talking about the scary human-like robots you see in sci-fi movies. Instead, it would just be an algorithm that helps determine certain things, such as the risk of somebody being allowed to remain in the community, based on collected data.
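To make that concrete, the kind of tool described above usually boils down to a scoring function over case data. The Python sketch below is purely illustrative: every feature name and weight is hypothetical, invented here to show the shape of such an algorithm rather than taken from any real system.

# Purely illustrative sketch: a toy risk score of the kind such tools compute.
# Every feature name and weight below is hypothetical, not from a real system.

FEATURE_WEIGHTS = {
    "prior_convictions": 0.4,   # count of prior convictions
    "failed_appearances": 0.3,  # count of missed court dates
    "age_under_25": 0.2,        # 1 if the defendant is under 25, else 0
    "stable_employment": -0.3,  # 1 if the defendant is employed, else 0
}

def risk_score(case: dict) -> float:
    """Return a weighted sum of case features; higher means higher assessed risk."""
    return sum(weight * case.get(feature, 0)
               for feature, weight in FEATURE_WEIGHTS.items())

print(risk_score({"prior_convictions": 2, "failed_appearances": 1,
                  "age_under_25": 1, "stable_employment": 1}))  # prints 1.0

The point is only that “legal robot” here means a function from recorded facts to a number, not an android; a deployed tool would be a statistical model trained on far more data.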
Technology holds no grudges, and it need not lack the information to make a decision. It can help dissect the facts in a more efficient, objective and informed fashion, and save time when determining a verdict, or even a sentence, if the judge’s role were to be augmented.
In fact, a legal robot could be crammed with a far broader range of facts and figures about the nature of crime, cases on record and the law, making it far better informed than a juror who typically has little awareness of such matters.
Machine learning could not only make a highly knowledgeable juror possible, but could also remove the discriminatory factors that may exist in a courtroom.
Legal robots banish bias from the courtroom
Which brings us to the next point: people are flawed. They hold pre-existing biases and judgments about issues, people and experiences. As such, they can never truly approach a case with a clean slate.
For example, in accusations of rape, women are often subjected to particularly harsh scrutiny and invasive questioning. They may be grilled on personal information that has no bearing on the decision, such as their sexual history or what they wore at the time of the alleged attack. Such prejudices insinuate that the incident was partly the woman’s fault and that she somehow “asked for it” to happen.
And this may play on jurors’ own biases, which could at least partly explain why the number of people prosecuted for rape in the UK fell by 26.9 per cent in a single year.
Expecting randomly selected members of the public to decide a person’s fate is outdated, because the notion of a fair and impartial jury doesn’t exist. Human testimony need not be judged by fellow human beings, because we can never rid ourselves of bias. In arguments for and against juries, we should recognise the limitations inherent in being human and accept that AI is here to help.
AGAINST: AI is no answer to a human trial by jury
Arguments for and against juries weigh decisions made by humans against those generated by AI. Indeed, human judgment is rarely perfect. We may never have all the answers, or knowledge, about a legal predicament. But technology isn’t without its flaws either. In fact, it can be just as biased as humans.
After all, AI, computers and legal robots are made by humans, and like humans, technology can make mistakes and carry the same discriminatory attitudes. For example, on facial recognition software, people of colour are more likely than white people to trigger a “false positive” match, which means they are more likely to be subjected to a wrongful police stop and search.
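How is such a disparity actually measured? The Python sketch below is again purely illustrative, with made-up match records standing in for the large benchmark datasets real audits use; it simply computes the false-positive rate separately for two hypothetical demographic groups.

# Illustrative only: measuring the false-positive disparity described above.
# The records are invented; real audits use large benchmark datasets.

records = [
    # (group, truly_a_match, software_said_match)
    ("group_a", False, True),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

def false_positive_rate(group: str) -> float:
    """Share of true non-matches the software wrongly flagged, within one group."""
    non_matches = [r for r in records if r[0] == group and not r[1]]
    flagged = [r for r in non_matches if r[2]]
    return len(flagged) / len(non_matches)

print(false_positive_rate("group_a"))  # 0.5: half of non-matches wrongly flagged
print(false_positive_rate("group_b"))  # 0.0: no false positives for this group

A gap between those two numbers is exactly the kind of bias the paragraph describes, and it emerges from the system’s data and design rather than from any malicious intent.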
Joanna Bryson, professor of computer science at the University of Bath, found in her research that even the most sophisticated AI can inherit the racial and gender biases of those who create it. A robot juror may, therefore, hold the same prejudices as its creators.
The process by which AI reaches a decision would also lack transparency. If a human jury finds a person guilty of a crime, the jurors can discuss their decision and explain how they arrived at that conclusion. But a robo-jury wouldn’t be able to describe the nuances that led to its decision, nor would it be fully capable of understanding matters, often emotional ones, that are uniquely human.
That is to say, we should consider the need for human judgment in our arguments for and against juries, and question whether technology can truly serve as a fair and impartial jury. Human testimony should be judged by fellow human beings, especially when the judgment could result in years behind bars.
What happens when legal robots are hacked?
A further issue with allowing legal robots into our courtrooms is ownership. Who designs the algorithms and trains them, and can the makers be trusted to give the robots a clean slate from which to make fair judgments?
Makers of legal robots may also be subject to distinctly human failings beyond unconscious bias, such as susceptibility to bribery or corrupting influence. Simply put, legal robots could be hacked to benefit the accused. And if they’re privately owned, there may be little transparency over how a robot came to a conclusion and whether its decision was interpreted, or intercepted, by an external body.
We should embrace the useful ways technology aids the criminal justice system, such as legal chatbots giving free legal advice or data systems processing information quickly. But it’s a frighteningly dangerous route if we start putting people’s fate in the “hands” of legal robots. Trial by a jury recruited from members of the public may never be perfect, but replacing it with a robo-jury is not the answer.