Opinion

AI isn’t human, but we keep talking like it is – and it’s putting businesses at risk

Paul Maker, chief technology officer at software company Aiimi, warns of the dangers of anthropomorphising artificial intelligence systems

The tech world loves hyperbole. Every new product is “groundbreaking” or heralds “the dawn of a new era”, and the bigger the announcement, the better. At best, it feels contrived. At worst, it’s downright misleading. While still in its relative infancy, generative AI has unfortunately fallen victim to the same trend.

OpenAI recently described its o1-mini model as capable of “thinking through problems like a person would”. Just last week, DeepSeek unveiled its new “reasoning” model, claiming it can fact-check its own answers. These statements are certainly attention-grabbing, but are they true?

Terms like “thinking” and “reasoning” are deeply ingrained in the AI lexicon. They’re convenient shorthand for describing technical processes, but they also evoke human cognition and suggest a level of understanding that AI simply doesn’t have.

Right now, AI doesn’t think. It doesn’t reason. It makes predictions based on statistical patterns in its training data. There is no doubt that its capabilities are impressive, but let us be clear: prediction and cognition are not the same. Blurring the lines between these concepts isn’t just semantics; it creates real business risks.
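To make that distinction concrete, here is a minimal, purely illustrative sketch of what a language model does at each step: it scores every candidate next token and turns those scores into a probability distribution. The vocabulary and numbers below are invented for the example, not taken from any real model.

```python
import math

# Toy illustration (invented vocabulary and scores): given the prompt
# "The capital of France is ...", a language model assigns a raw score
# (logit) to every possible next token.
logits = {"Paris": 6.2, "London": 3.1, "banana": -1.4}

# Softmax: convert raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {token: math.exp(v) / total for token, v in logits.items()}

for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token}: {p:.3f}")

# Generation is just repeated sampling from distributions like this one.
# The model "knows" nothing about France; "Paris" simply co-occurred
# with this context most often in the training data.
```

Real systems run this loop over vocabularies of tens of thousands of tokens, but the mechanism is the same: statistical prediction, not understanding.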

The risks of anthropomorphising AI

Humanising AI in our language leads us to overestimate its capabilities, which can create dangerous dependencies. More often than not, this stems from a well-intentioned desire to streamline time-consuming or monotonous work. But when we believe an AI system can “reason” in the human sense, we are more likely to entrust it with higher-stakes tasks, often without fully understanding its limitations.

This can result in dependence on AI for critical decision-making, misplaced trust in its ability to handle sensitive information, or an over-reliance on its outputs for forecasting and planning. The consequences of such misjudgement can be brutal: operational failures, regulatory breaches and reputational harm, to name a few.

Cutting through the noise

Part of the problem lies with tech demos and overzealous announcements, where much of this hype-inducing, humanising language originates. Our first exposure to new AI developments tends to be through these polished marketing tools, where we see AI “thinking” and “reasoning” in controlled environments that bear little resemblance to the real world.

In reality, AI systems don’t operate in such perfect conditions. Data inputs are often incomplete, noisy or biased and, as a result, the outputs can fall far short of expectations. I’ve seen this first-hand countless times: systems that perform seamlessly in controlled “lab conditions” failing miserably when confronted with the messy, imperfect data they encounter out in the wild.

Taking a pragmatic approach 

To avoid this outcome, businesses must take a step back and focus on what AI can realistically deliver for them. Start by asking: what specific problems am I trying to solve? How can AI help? And most importantly, what are its limitations? Without clearly defined objectives, you risk chasing the latest technology just to jump on the bandwagon.

Next, take the time to get your house in order. Good data is the foundation of successful AI and even the most cutting-edge tools will fail without the basics in place. Here’s how to lay the groundwork:

Organise your data

Audit your data to ensure it is consistent, accurate and high quality (a minimal audit sketch follows this list).

Label your data

Clean, label and update datasets to give AI models the structured, meaningful inputs they need to perform in real-world scenarios.

Control access

Establish strong data governance to ensure sensitive data is accessed only by those who need it, balancing security with usability.
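As a concrete starting point, the audit described above can begin as a short script that surfaces missing values, duplicates and inconsistently spelled labels. The sketch below is a minimal example assuming pandas and a hypothetical customers.csv file; the file and column names are invented for illustration.

```python
import pandas as pd

# Minimal data-audit sketch (hypothetical file and column names):
# surface the basic quality problems that quietly derail AI projects.
df = pd.read_csv("customers.csv")  # assumed example dataset

# 1. Completeness: what share of each column is missing?
missing = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:")
print(missing)

# 2. Consistency: exact duplicate rows inflate and bias training data.
print("Duplicate rows:", df.duplicated().sum())

# 3. Label hygiene: free-text categories often hide the same value
# spelled several ways ("UK", "U.K.", "United Kingdom").
if "country" in df.columns:  # assumed column
    print(df["country"].str.strip().str.upper().value_counts())
```

None of this is glamorous, but it is exactly the groundwork that separates a model that shines in a demo from one that survives real-world data.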

It’s important to get a good handle on your data, but don’t let the process become so overwhelming that it stops you from experimenting – that’s the fun part! Start small by focusing on specific use cases where your data is already in good shape and build your AI capabilities from there.

AI isn’t human and it doesn’t need to be. Its potential is immense and the opportunities it brings are genuinely exciting. As tech leaders, it’s on us to understand what AI can really do (and help our teams do the same). When we find balance and combine AI’s capabilities with human creativity and wisdom, we can achieve extraordinary results without getting caught up in the hype.

Paul Maker is chief technology officer at Aiimi.