Google Translate is among the best-known pieces of online software, and for 14 years (it was launched in 2006) it demonstrated that “machines” were not yet able to translate from one language to another the way a flesh-and-blood translator can.
Nothing surprising about that: feeding an automatic system the grammatical rules of a few languages and the vocabulary needed to map words from one to the other is not enough to correctly recreate the nuances, subtleties, and context that characterize something as complex and creative as human language.
Since their creation, machines have replaced human work when it came to performing complex calculations (because the task is strictly defined), but they struggle when challenged with anything that involves creativity.
Recently, this has started to change.
With the advent of deep learning (a technique that has today become almost synonymous with artificial intelligence) we have witnessed not only incredible achievements in areas that have very little to do with mathematics (from image recognition to the creation of pictures), but also artificial intelligences that compete successfully in even more complex areas, including language.
The forerunner in this field was Google Translate, which in November 2016 began to exploit deep learning and immediately made an impressive qualitative leap.
Explaining in detail how this qualitative leap took place would take too long.
In a nutshell: thanks to deep learning, Google Translate stopped rigidly applying the rules and vocabulary of the languages it was translating (an approach that had little success) and began to determine the correct translation of a word or phrase based also on its context. Or rather, on a statistical evaluation of which translation of a word is most likely given the other words that appear nearby (drawing on the huge database available).
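The intuition behind this context-based, statistical approach can be illustrated with a deliberately tiny sketch. This is not Google's actual system (which uses deep neural networks trained on enormous parallel corpora); it is a toy with hand-made, hypothetical co-occurrence counts, showing how nearby words can disambiguate a word with multiple translations, such as the English "bank" rendered in Italian as "banca" (the financial institution) or "riva" (the edge of a river).

```python
# Toy illustration of context-based translation choice.
# The counts below are invented for the example: how often each Italian
# translation of "bank" appeared near each context word in an imaginary corpus.
cooccurrence = {
    "banca": {"money": 40, "loan": 35, "account": 30, "river": 1, "water": 0},
    "riva":  {"money": 1,  "loan": 0,  "account": 0,  "river": 45, "water": 38},
}

def translate_bank(sentence: str) -> str:
    """Pick the translation of 'bank' that best fits the surrounding words."""
    context = sentence.lower().split()

    def score(candidate: str) -> int:
        # Sum how often this candidate co-occurs with each word in the sentence.
        counts = cooccurrence[candidate]
        return sum(counts.get(word, 0) for word in context)

    return max(cooccurrence, key=score)

print(translate_bank("I opened an account at the bank"))   # banca
print(translate_bank("We sat on the bank of the river"))   # riva
```

Real systems replace these hand-made counts with probabilities learned from billions of sentence pairs, but the principle is the same: the surrounding words, not an understanding of meaning, drive the choice.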
Once the extraordinary progress made by Translate was taken into account, even a creative and intellectual profession like the translator's was inevitably added to the lists of “endangered jobs”.
And it is from here that Hofstadter (professor of Cognitive Science and Comparative Literature), in a long article, sets out to develop a fierce criticism of those who confuse language with a code to be cracked and believe that machines can really understand written texts and replace human translators.
Machines, of course, can’t really understand a text.
As Hofstadter notes, machines can at best bypass or evade the true understanding of a text: they must correctly reproduce it in another language (with all its nuances, ambiguities and idioms) without ever grasping its meaning.
Artificial intelligence reproduces idioms literally, draining them of all meaning, or gets tangled in complex sentences, producing output that simply makes no sense.
Hofstadter can thus claim victory: “We humans know everything about relationships, households, personal possessions, pride, rivalry, jealousy, privacy and many other abstract concepts that lead to oddities like married couples who have towels embroidered ‘his’ and ‘hers’ on them. Google Translate is not familiar with these situations. Google Translate is not familiar with situations, period”.
Put to the test by Hofstadter with excerpts from German and Chinese novels as well, the machine keeps failing: it does not render words correctly when they take on meanings other than their most common ones, and it recreates entire sentences in a confusing if not incomprehensible way (especially from Chinese to English).
The conclusion is clear: the statistical (and probabilistic) approach of artificial intelligence, although based on an immense amount of data, cannot compete with the human ability to grasp the nuances of one language and recreate them as faithfully as possible in another.
But this is not a surprise: what should we expect from a machine that is dealing with something as incredibly complex as human language?
The real surprise is that, in many other cases, Google Translate manages to get around the need to really understand a text and still return an accurate translation.
In the examples Hofstadter chose, this has rarely happened.
The key question, then, is this: should we be surprised that an artificial intelligence often mistranslates complex narrative works, or should we be surprised that in so many other cases it manages to translate almost correctly?
And so, will translators really disappear from circulation?
The most likely answer to this second question is: it depends.
If we’re talking about translators working on scientific texts (like a school chemistry manual) or instruction manuals, it’s very likely that artificial intelligence algorithms will soon be able to handle the job on their own (with some supervision, they already do).
But if we are talking about translating a poet like Pushkin (a task Hofstadter himself has tried his hand at), we can rest assured: only a human being can translate works of such complexity, which inevitably demand human intellect, creativity and understanding.
Even if, with that piece by Dostoevsky, Google Translate did just fine.
To find out more about the Babel Jumper project and the sale of BBJT Tokens, visit our website www.bbjtoken.io and our Social channels.