The oldest-known literary work, the Epic of Gilgamesh, was recast from ancient Akkadian into other languages thousands of years ago and was possibly the first text ever translated. Fast-forward a few millennia and, although translators have long since stopped pressing cuneiform into clay tablets, translation remained a difficult, painstaking process – until very recently, when machine translation changed everything.
Machine translation was for decades a very difficult problem to solve. Despite significant investment from the Soviet Union and the US military’s research agency DARPA in the years following the Second World War, many attempts to crack the challenge fell flat.
So when Google released its first translation service in 2006, it was an impressive if unreliable achievement, built on predictive algorithms that look rudimentary by today’s standards. But when the company switched from “statistical machine translation” to a neural network a decade later, the quality of its machine translations improved dramatically.
Competitor DeepL was founded just a year later, in 2017, and uses its own neural network – a term for AI models that loosely mimic how the brain functions – which it claims gives it an edge over its competitors. While DeepL’s usage numbers pale in comparison with those of Google Translate, which has more than 1 billion installations, it has sizeable investor backing and was recently valued at $2bn (£1.58bn) following a $300m (£237m) cash injection in May.
Speaking the same language
The Cologne-headquartered startup has been successful at converting multinational companies to its services. DeepL claims to have over 100,000 business customers, including Nikkei, Fujitsu, Deutsche Bahn and half of the Fortune 500. Use cases range from translating customer support messages to improving internal communications within global enterprises.
“Language poses a really tough challenge for businesses, especially those that are global,” Dr Jarek Kutylowski, CEO and founder of DeepL, says. “There are multiple ways you can tackle the problem: by recruiting people with language skills, using translation agencies or by employing technology.”
For instance, one unnamed customer, a car manufacturer with an R&D department in Japan, uses DeepL’s technology to translate emails sent between its Japanese staff and the US, where its main commercial presence is located. Previously, communication between the offices was a logistical challenge but the technology allows staff to communicate more effectively, saving time and resources.
Translating technical or legal documents is another pressing business challenge, especially for law firms, which, for compliance reasons, often prefer relying on a single platform over turning to translation agencies, according to the CEO.
“It’s sometimes hard to find fluent human translators who are confident enough to translate complex documents but AI can quite often cover technically or legally complex topics very well,” Kutylowski says.
Lingua franca
As translation technologies improve, there may be broader implications for skills and the way businesses are structured. AI translation is giving non-English speakers greater access to the English-speaking world – almost half (49.9%) of online content is in English. “That’s really changing some people’s worlds,” says Kutylowski.
The status of English as the lingua franca for business is unlikely to change within the next two decades, at least according to the British Council, which tracks the use of the language in its Future of English: Global Perspectives report.
Some translators have even expressed despair that the accuracy of DeepL’s translations might make their roles redundant in the near future – and that speaking English could become less of a foundational skill in the global business landscape.
“I think to an extent that’s going to be the case,” says Kutylowski. Text translation is already good enough for remote organisations to communicate effectively via text-based tools like Slack. A common language can help but it’s no longer strictly necessary.
“The ability of people to get by in a business context, without knowledge of the local language, is going to increase,” Kutylowski says. “The importance of language skills is going to drop. The ability to speak another language will still be an advantage but most people will be able to do without it.”
Advances in generative AI could push translations further still, correcting conversations for language, tone and context in real time. This kind of quick-fire translation could bring to life the future-fiction staple of the universal translator, as found in Star Trek and Doctor Who – at least as far as the written word is concerned.
“With text, we’re really there already,” says Kutylowski. “I’ve heard anecdotes of people conversing online who were fully convinced that the other person could speak their language. Primarily Slack-based organisations can already work in whatever language they choose.”
However, real-time speech poses a harder challenge because of the way speakers naturally communicate. When we talk, we are usually thinking on the fly, anticipating how sentences are likely to end and reading other contextual cues as the conversation flows.
“In order to produce an accurate audio translation you have to hear the sentence until the end,” says Kutylowski. “There’s an inherent delay in there. Even if latency is reduced to zero milliseconds, you’ll still only hear the translation after I’ve said the sentence.”
So unfortunately for sci-fi fans, a true universal translator may remain confined to science fiction, for the time being at least.