Hannah Fry is getting exercised about a domestic appliance. The refrigerator in question was in perfect working order, but “it had a sticker on it that said: ‘This fridge is AI ready.’ I just don’t know what on Earth that means.”
If she can’t understand that claim, there won’t be many people who can. Fry, a professor in the mathematics of cities at University College London, is a seasoned public speaker and broadcaster who won the prestigious Zeeman Medal in 2018 in recognition of her work to improve the public’s understanding of maths.
That same year she published Hello World: How to Be Human in the Age of the Machine, which gained widespread acclaim and a place on numerous award shortlists. The follow-up, out this week, is co-written with geneticist Adam Rutherford, her co-host on The Curious Cases of Rutherford and Fry, a BBC Radio 4 series in which the pair use science to solve mysteries submitted by listeners.
Rutherford and Fry’s Complete Guide to Absolutely Everything (Abridged) seeks to challenge some of the assumptions we make about the world and show us how to “bypass our monkey brains”, which have evolved to “tell us all sorts of things that feel intuitively right but just aren’t true”.
Continuing the theme of absurd claims about AI, Fry recalls talking to an entrepreneur who claimed he could use the technology to analyse and improve film scripts.
“He said he could tell you how to change one word in the script to make your film do better at the box office,” she explains. “How do you validate something like that? It’s just snake oil, but it’s kind of genius in a way, because it creates something where you can’t possibly conduct a controlled trial, so you could never test the claim.”
‘Crap science’
Clearly on a roll with debunking what she calls “crap science”, Fry is particularly scornful about some of the wilder claims concerning facial-recognition software. In particular, she points to a company that purports to have developed a system that can measure how much attention pupils are paying in lessons by analysing their expressions.
“There’s also an algorithm that’s been used to detect the ‘true pain’ – I’m doing air quotes here – of someone based on their expression, to decide whether they should receive painkillers for their chronic condition,” she says with a note of disbelief. “That’s an example of crap science with AI stuck on top.”
Even Amazon’s widely admired product-recommendation engine, she adds, is only a few percentage points more accurate than recommending products at random.
While it’s clear that she has no time for anyone making exaggerated claims about the technology’s applications, Fry stresses that it is “possible for AI to have genuine, monumental potential. There’s low-hanging fruit all over the place. If you look at some of the papers that have been published in journals such as Nature and Science about how really good, sophisticated AI is being used, it’s obvious that there’s incredible potential.”
It’s therefore important to differentiate what works well in the laboratory from what’s possible outside it. AI may well produce useful and interesting results in a controlled setting, but that doesn’t mean it will deliver these in the real world, which is a lot more complex and random. The problems tend to arise when people overestimate AI’s ability to predict human behaviour. The focus should instead be on its capabilities as an incredibly sophisticated pattern-hunter, she suggests.
“Take image-recognition software that analyses mammograms, for instance. You can demonstrate that this works in a lab setting,” Fry says. “But it’s a very different matter to integrate the technology into a hospital setting and have it work with real patients alongside real radiologists so that it makes a positive difference instead of overcomplicating things. That, I think, is where my scepticism lies. People sometimes just see the word ‘AI’ and it’s all sparkly and magical. It can make them forget about all of the other important things that have to go alongside it.”
Keep it focused
AI works best, she argues, when its use is highly targeted and the outcomes sought from it are appropriately specific. Fry, who has worked with Google’s AI division, DeepMind, cites its work on meteorological ‘nowcasting’, released last month. To predict rainfall over a given region for the next hour or two, a deep generative model takes several consecutive radar observations from the previous 20 minutes and, drawing on patterns learned from years of historical radar data, generates realistic sequences of future rainfall. It’s shaping up to be a very accurate system.
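The mechanics are easier to picture with a toy example. The Python sketch below is emphatically not DeepMind’s system – that is a deep generative model trained on years of archived radar – but a crude ‘persistence plus trend’ baseline that only illustrates the shape of the nowcasting task: a stack of recent radar frames goes in, a short sequence of predicted frames comes out. Every name and parameter here is hypothetical.

```python
import numpy as np

def naive_nowcast(radar_frames: np.ndarray, steps: int = 6) -> np.ndarray:
    """Extrapolate rainfall from recent radar scans.

    radar_frames: shape (4, H, W) -- four scans covering the previous
    20 minutes at 5-minute intervals. Returns `steps` predicted frames,
    one per future 5-minute interval. A learned generative model would
    produce far more realistic rainfall evolution than this linear trend.
    """
    trend = radar_frames[-1] - radar_frames[-2]  # latest frame-to-frame change
    last = radar_frames[-1]
    preds = [np.clip(last + k * trend, 0.0, None)  # rainfall can't go negative
             for k in range(1, steps + 1)]
    return np.stack(preds)

# Four synthetic 64x64 radar scans standing in for 20 minutes of data
frames = np.random.rand(4, 64, 64).astype(np.float32)
future = naive_nowcast(frames, steps=6)  # the next half-hour, in 5-minute steps
print(future.shape)  # (6, 64, 64)
```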
Another business that’s applying AI sensibly and effectively, according to Fry, is Manchester firm Howz. “I love them,” she says.
Howz has created an app in which AI learns the typical electricity consumption patterns of an elderly person living alone by monitoring their smart meter. If it detects a sudden decline in activity – a warning sign that all may not be well – the system automatically alerts that person’s carers and loved ones. This is a great example of AI solving a specific problem and meeting a particular need, Fry says.
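Howz hasn’t published its method here, so the Python sketch below is only a guess at the general shape of such a check: learn a household’s typical hourly rhythm from past smart-meter readings, then raise an alert on a day whose usage falls far below it. Every function name and threshold is hypothetical.

```python
import numpy as np

def build_baseline(history: np.ndarray) -> np.ndarray:
    """Learn a typical daily rhythm from past smart-meter readings.

    history: shape (days, 24) -- hourly kWh readings for each past day.
    Returns the median usage for each hour of the day.
    """
    return np.median(history, axis=0)

def activity_alert(today: np.ndarray, baseline: np.ndarray,
                   threshold: float = 0.3) -> bool:
    """Flag a sudden decline in activity.

    Returns True if today's usage so far is below `threshold` times
    what the baseline predicts for the same hours -- a crude stand-in
    for the kind of check that might trigger a carer notification.
    """
    expected = baseline[:len(today)].sum()
    return expected > 0 and today.sum() < threshold * expected

# 28 days of synthetic hourly readings, then a worryingly quiet morning
rng = np.random.default_rng(0)
history = rng.gamma(2.0, 0.25, size=(28, 24))
baseline = build_baseline(history)
quiet_morning = np.full(10, 0.02)  # almost no usage by 10am
print(activity_alert(quiet_morning, baseline))  # True -> notify carers
```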
Why numeracy counts
Why are so many people (including senior decision-makers in business and government) getting it wrong about AI, overestimating its powers and falling for crap science? Are too many of us dazzled by the technology because of our poor grasp of STEM subjects?
“Over the past 10 years, I’ve noticed that people have started recognising that, regardless of how they feel about maths, they have to take data seriously,” Fry observes. “For this reason, the level of maths literacy – or at least the desire to be maths literate – has changed. But people are still quite scared of the subject. In some ways, it takes a while for this stuff to filter through. I think it will be 10 or 15 years before people have the training they require to use these tools effectively.”
But AI still seems to inspire a mix of admiration and, more crucially, fear among many non-experts. Fry points to last year’s backlash against the government’s plan to use an algorithm to decide what grades pupils would have achieved if they’d been able to sit their exams. “It wasn’t even really an algorithm,” she says. “It was just statistics.”
In another school-based example, the Massachusetts Institute of Technology did come up with an algorithm. Designed to improve the efficiency of school bus services in Boston by adjusting their routes and timetables, the system ticked many of the right boxes, but a group of parents were unhappy about the new pick-up times. Pointing out that a spooky black box was behind the changes lent weight to their opposition campaign, says Fry, who argues that we tend to be more forgiving of human error than we are of the technology when it doesn’t get things quite right first time.
Meanwhile, businesses are collecting record amounts of data, even if they don’t yet know how to obtain the maximum value from it. Fry acknowledges that the trend is making both the public and the government increasingly concerned about privacy, but she believes that, “when you get people who really understand how to extract meaning from the right kind of data, you don’t need something invasive”.
Intellectual humility
More generally, Fry’s solution to the data privacy issue is for those collecting all this material to be totally clear about what they intend to do with it. If a company can prove its willingness to disclose all its data-processing policies and discuss these in an open forum, this could even form part of its ESG credentials.
“Maybe I’m being naive, but can transparency ever go wrong? Perhaps it’s because I come from the world of science, which is based on openness and knowledge-sharing,” she says. “If you’re open and you have intellectual humility, you invite comments. Does that ever go wrong?”
Fry’s cautiously optimistic view of our future with AI – based on practical approaches that acknowledge its limitations, along with a willingness to explain to non-experts how the technology is being used – will strike a chord with many. Just don’t get her started on that fridge again.