
Some day soon, the pub debates of yore, where excruciating arguments over small factoids could fill multiple hours, may be all but forgotten. Smartphones drained the life from this pastime, as suddenly the answers to most of our questions were available with just a few taps on our handy devices. No longer must we debate the least popular flavour of crisps, who won the 1985 FA Cup or how many films Grace Kelly starred in – just look it up.
Psychologists call the tendency for people to rely on search engines rather than their own memories the Google Effect. It’s not difficult to understand the reason behind this phenomenon: with the internet always at our fingertips, it’s simply easier to hammer out a search query than it is to mine our memory banks. We wilfully relaxed our brains, and the internet stepped in to fill the gap.
Humanity is now coping with another technological step-change: generative AI. Researchers are keen to understand how an over-reliance on AI could impact our cognitive abilities – and so far their findings paint a bleak picture.
‘A fairly serious threat’
Determining whether the use of GenAI is hastening cognitive atrophy matters, not least because Silicon Valley’s tech titans, with billions of dollars invested in AI infrastructure, appear keen to cultivate our reliance on the technology.
Ironically, confirmation of such worries comes from one of AI’s biggest corporate backers, Microsoft, which, in conjunction with Carnegie Mellon University, found that the use of GenAI may indeed diminish our cognitive abilities.
The researchers asked 319 knowledge workers to report how they use GenAI. They found that respondents who were sceptical about AI systems were more likely to engage their own critical-thinking skills. But those with greater confidence in AI tools were less inclined to steer or monitor them – especially in seemingly low-stakes tasks. Such behaviour could result in “diminished independent problem-solving,” according to the researchers.
In general, people seem to trust GenAI far more than they should
Researchers also witnessed a shift from so-called ‘task execution’ to ‘critical integration’. Rather than performing tasks themselves, employees using AI systems became mere overseers of their tech tools. Moreover, those who turned to GenAI to assist with critical-thinking tasks generally produced less-diverse outcomes than those who did not.
According to Gary Marcus, professor emeritus of psychology and neural science at New York University, the prevalence of GenAI presents a “fairly serious threat” to our cognitive abilities and critical thinking skills.
“In general, people seem to trust GenAI far more than they should,” he says. “Because the output looks good, people rarely dig deep enough to see whether it is correct – and they rely on it more and more, even when they shouldn’t.”
Marcus, who has called for a pause on training powerful LLMs until more appropriate safety regulation is established, takes a dim view of GenAI in the workplace. Some future form of AI might free us from society’s mundane tasks and enable us to focus on more creative ones, as Silicon Valley firms promise, but we’re not there yet, he says.
“GenAI is really not reliable or trustworthy, so handing off power to it is often not a great idea,” Marcus adds.
People tend to trust GenAI regardless of whether or not it’s up to the task, says Dr Alexandra Dobra-Kiel, a behavioural scientist and director at the Behave consultancy. This misplaced trust, she argues, can reduce our capacity for problem-solving. When people are busy, they like to take shortcuts – and GenAI is, if nothing else, a shortcut.
“Employees can rely on its outputs without really considering the human oversight behind it, taking it at face value and hitting copy and paste,” she adds.
GenAI: who’s really in control?
Today’s GenAI platforms are largely the domain of Silicon Valley. Data experts have repeatedly highlighted the risks of algorithmic bias, and as more decision-making is outsourced to machines, the threat of biased or flawed outcomes grows. What happens, asks Dobra-Kiel, when the world’s internet users turn to the same five or six AI tools, all trained on broadly similar data?
This could lead to homogenised outputs and thinking – a flattening of human creativity in favour of whatever the black-box algorithms deem “optimal”. Such homogeneity could also stifle innovation in businesses that have uncritically gone all-in on AI.
Dobra-Kiel explains: “I have asked all these different platforms the same sort of questions and they all respond in a very similar way. This is worrisome: could information outputs become so standardised that they reinforce a loop of homogenisation? If this becomes the norm, perhaps some human characteristics, such as emotion and cultural nuance, will be eliminated from communication.”
She continues: “For instance, I communicate in different ways when the recipient is German or French or English – with AI models, that’s completely removed. It also raises questions about cultural identities, and perhaps even the identity of the self, in corporate contexts.”
Is GenAI changing, rather than eroding, critical thinking?
Perhaps we’re not thinking critically enough, with or without AI, says Dr Rebecca Hinds, head of the Work Innovation Lab at Asana, a software company.
“Most workers’ time is spent on busywork,” she says. “In today’s workplace, where employees are constantly multitasking and fighting for focus, critical thinking is in short supply. AI could be exposing or amplifying that reality, rather than creating it.”
However, she acknowledges that AI “may reduce critical thinking” when “used passively” – in other words, when outputs are accepted without question or if AI is used to automate routine tasks.
It all depends on who is using AI and how. According to Hinds, recent studies haven’t “truly examined whether AI is decreasing critical thinking, because they often overlook a key comparison – how people use critical thinking with AI versus without it”. Instead, the studies focus on what leads workers to use more or less critical thinking across various AI tasks, without establishing a baseline for critical thinking in the absence of AI.
“There’s a high bar for interpretability, and this demands critical-thinking skills,” counters Hinds. “Our research shows that 40% of workers expect explanations for all AI-generated outputs. They don’t just want an answer, they want to understand why AI reached a conclusion.”
Critical thinking required
Even as businesses accelerate their adoption of GenAI, around 81% of the US workforce aren’t using the tools at all. But GenAI is being foisted upon users whether they want it or not – it’s being integrated into office software, including email, customer service and enterprise platforms.
The technology is particularly popular among younger workers: young leaders are using GenAI tools at least twice a week, according to a recent survey. Organisations lacking their own GenAI platforms may therefore find employees turning to unsanctioned tools – so-called shadow AI – which put their data at risk.
Dr Michael Gerlich, a professor at the Swiss Business School, recently published a paper investigating AI’s impact on the future of critical thinking. The paper, which acknowledged the preference for AI assistance among the young, was met with concern from educators, who noted that critical thinking develops during adolescence and warned against any offloading of thinking processes during these crucial developmental years.
Given today’s workplace overload, critical thinking is in short supply. AI could be amplifying that reality
But, according to Gerlich, AI users can combat cognitive offloading by challenging and interrogating the platforms they use. He suggests thinking of AI tools as people-pleasing acquaintances, who always tell you what you want to hear rather than what you need to hear.
“AI makes life more comfortable for us,” Gerlich explains. “We get a quick solution.” But using AI out of ease, rather than suitability, risks damaging our cognitive abilities.
GenAI providers have designed their platforms to be as frictionless as possible, and firms are unlikely to make their systems more truculent for fear of losing users to the competition. It is therefore up to the user to embrace discomfort and create a more confrontational back-and-forth with the platform.
“You have to force yourself to avoid the confirmation bias that’s systemic with GenAI,” he advises. “Command it not to predict what you would like to hear; to provide a critical analysis or opposing opinions.”
Modern problem, ancient solution
“There is no option for companies to not use AI,” Dobra-Kiel says. “That would be suicidal.” The technology therefore will almost certainly become ubiquitous, so we had better find a way to keep our cognitive abilities sharp as we use it.
One possible solution is to engage the AI bot in a Socratic dialogue – the classic critical-thinking method based on asking questions and challenging assumptions.
“A Socratic framework can help you challenge your own assumptions as well as the assumptions the AI outputs are based on,” Dobra-Kiel says. “Companies must train their staff in that kind of thinking if they want to avoid purely homogenised outputs.”
Such an approach must be informed by genuine concern for human thinking and ethics. Too often, Dobra-Kiel adds, when we discuss responsible AI, the conversation falls into the territory of compliance.
“Ethics is not compliance. Compliance is an audit-based approach, which often stifles experimentation or innovation. Ethics is really about nuance and dilemmas and this is why critical thinking is key,” she explains.
Society is only beginning to understand AI – its nature and its potential impacts. While it may be too late to reverse the Google Effect and save our memories, armed with humanity’s quintessential critical-thinking technique and a clear understanding of the risks of AI offloading, we may yet be able to save our cognitive abilities.
