
Last month, the announcement of a new model from the Chinese startup Deepseek sent tech companies on the Nasdaq into a tailspin.
Deepseek claims its AI assistant was trained and developed at a fraction of the cost of most large language models (LLMs) and uses less data and energy to run.
Chipmaker Nvidia alone lost nearly $600bn (£484bn) in value following the announcement. OpenAI – which has itself been accused of using data without permission – accused the disruptor of stealing its training data. The CEO of Meta, Mark Zuckerberg, assembled “war rooms” of engineers to figure out how the startup achieved its model.
Hyperscaler AI commitments unlikely to change – yet
However, some technologists have cast doubt on Deepseek’s cost efficiencies. It’s too early to reckon with the impact of Deepseek’s supposedly cheaper and more efficient AI models, says Simon Baxter, principal analyst at TechMarketView. In Baxter’s view, the stock-market chaos was a “knee-jerk reaction” to fears that Deepseek would slow growth for Nvidia and other providers in the data-centre space.
But it seems unlikely that growth will slow any time soon, he says, given the substantial AI commitments already made by both the hyperscalers and IT solution providers. “Any existing commitments to build AI infrastructure are likely to remain unchanged, though other factors like the current trade disputes could prove disruptive,” says Baxter.
Deepseek could provoke a shift from building to scaling
Generative AI requires large amounts of computing power to run. If the less energy-intensive model used by Deepseek works as claimed, providers might shift their focus from increasing their computing power to scaling AI more efficiently, says Haritha Khandabattu, a senior analyst at Gartner, specialising in AI.
There’s going to be an LLM price war
The overall cost of deployment won’t be significantly affected, says Khandabattu. Most end-user organisations are unlikely to run Deepseek-like deployments themselves; those would still be managed by the big providers or their partners. For example, if Microsoft shifted to a more efficient scaling model, like Deepseek’s, for its Copilot service, end-users would probably be unaware of the change.
It’s a question of engineering and infrastructure investment for the vendors, rather than an operational consideration for most users.
However, Gartner does expect a decrease in pricing overall. If Deepseek’s model is as efficient as it claims to be, this upending of the AI computing model could help drive prices down. “Price will be a very big question,” says Khandabattu. “There’s going to be an LLM price war.”
But lower prices will be balanced by a need for more computing power to train and refine complex AI models, tailored to specific industries and use cases, adds Baxter.
China has truly entered the AI race
The big takeaway from the launch of Deepseek’s R1 model, says Baxter, is that China is now “fully part of the AI game”. He says that this will drive further innovation as model suppliers seek to compete and develop the next iteration of reasoning models.
“We’re already seeing several Chinese GenAI vendors reduce the inference costs of their large language models by over 50%,” adds Gartner’s Khandabattu. “And some by over 90%.”
Some organisations have raised the alarm over Deepseek due to its origins in China. The US Navy, for example, has already banned Deepseek, and US lawmakers intend to follow suit by preventing its use on all government devices.
That’s not only due to where the company is headquartered. The Deepseek application has also been sending unencrypted data to third parties.
GenAI security and shadow AI still a risk
The main considerations when deciding whether to use Deepseek are risk and compliance. Shadow AI, where employees use tools such as ChatGPT without the permission of their organisation, remains a persistent issue for businesses. If employees upload confidential data to GenAI platforms, it can create compliance and data security problems.
“Every organisation is going to have its own view of risk,” says Ray Canzanese, director of threat research at cloud-security company Netskope. This view of risk should factor into adopting any tool, no matter where it originates. “Whether you’re looking at Deepseek or ChatGPT, organisations should ask: what’s the agreement you sign when you start using it? How is your data being used?”
Compliance has been the number one concern since the beginning of the GenAI hype cycle
“Compliance has been the number one concern since the beginning of the GenAI hype cycle a few years ago,” Canzanese adds. He says that when ChatGPT first came out, Netskope’s clients took a cautious approach to implementation, blocking it until they understood what it did, how it used their data and whether it had valid business use cases.
However, Deepseek could be more secure for end-users than some of the most popular generative-AI platforms, provided organisations host the AI model themselves. Deepseek has partly open-sourced its model, so anyone can audit certain parts of the code for themselves.
Organisations self-hosting Deepseek can be sure the model is running on a server they control, with data staying local. The amount of information sent back to third parties is therefore greatly reduced, says Canzanese – as long as they keep reviewing the software and making sure “there’s no phoning home” or “sending back of any data or telemetry elsewhere”.
This is the least risky way to run an AI model, according to Canzanese. In this sense, Deepseek is more similar to Llama from Meta than it is to ChatGPT. “This is something where you can download the model and use it locally – that’s certainly what I would recommend,” he says.
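For teams wanting to try this, a minimal sketch of fully local inference with the Hugging Face transformers library is below. It is illustrative only: the model ID is an assumption (one of the distilled R1 variants published on the hub, to be verified before use), the weights must already have been downloaded to the local cache, and device placement via device_map assumes the accelerate package is installed. The offline environment variables are set before the library is imported so that inference cannot reach out to the hub at all.

```python
# Minimal sketch: running a distilled Deepseek R1 variant entirely locally.
# Assumes the weights were downloaded once in advance (e.g. on an audited
# machine). The model ID below is an illustrative assumption, not an
# endorsement of a specific variant.
import os

# Block network access from the hub/transformers libraries at runtime,
# so nothing "phones home" - everything must come from the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",       # place weights on available GPU(s) or CPU
    local_files_only=True,   # never contact the model hub
)

prompt = "Summarise the compliance risks of shadow AI in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run this way, prompts and outputs never leave the host; pairing it with network-level egress monitoring covers the “no phoning home” check Canzanese describes.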
So yes, Deepseek matters – but it may be a while before its full impact is felt.
