Everybody who keeps up with scientific news no doubt knows about the current research into making machines with human intelligence (also known as artificial intelligence, or AI). This is certainly not a new dream; scientists have been working on the problem for decades (and indeed it sometimes seems that software engineers do little else). Despite all that effort, however, only recently have we been able to make an AI capable of doing things that a human can do.
One recent advance is the self-driving car, which used to be a fantasy but has been made a reality by Google. It has been shown to drive itself for thousands of miles without human guidance or accidents, thanks to a combination of new sensor technology and clever algorithms. But is the car actually intelligent, or is it just following a set of rules?
This is the fundamental question behind much of AI research: can we make a machine that actually thinks and creates? One essential part of intelligence seems to be the ability to use language; certainly this is what sets humans apart from many other animals.
Although competition between Microsoft and Google is certainly nothing new, their recent rivalry over translation apps is particularly notable because it marks an important advance in artificial intelligence. Last December, Microsoft’s Skype Translator gave mobile users a way to translate speech, the first such product on the market. A month later, Google released its own version of the app.
This new development spurred us to compare machine translators with our own human translators. We firmly believe that a major part of the translation process is accurately carrying nuance across the language barrier. “Yeah, right” is very different from “okay,” but a machine would not necessarily understand that. This is what we wanted to put to the test.
We decided to test both written and spoken translators. We had to use Google’s app (testing both the desktop and mobile versions) instead of Microsoft’s because the Skype Translator was not available to test. On the human side, one of our top Spanish translators, Adriana, volunteered to translate the same document that the machine translator would. Gaby, another one of our translators, would judge both outputs based on grammar, cultural idioms, comprehension, and general strengths and weaknesses.
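For readers curious how such a comparison could be scored systematically, here is a minimal sketch in Python. The four criteria come from our rubric above; the 1–5 rating scale, the equal weighting, and the sample ratings are purely illustrative assumptions, not the actual scores from our test.

```python
# Sketch of a simple scorer for the judging rubric described above.
# Assumptions (not from the article): each criterion is rated 1-5,
# and all criteria are weighted equally.

CRITERIA = ["grammar", "cultural idioms", "comprehension", "overall"]

def score_translation(ratings):
    """Average a dict of per-criterion ratings (1-5) into one score."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical example ratings for a machine and a human translation.
machine_ratings = {"grammar": 2, "cultural idioms": 1,
                   "comprehension": 4, "overall": 3}
human_ratings = {"grammar": 5, "cultural idioms": 5,
                 "comprehension": 5, "overall": 5}

print(score_translation(machine_ratings))  # 2.5
print(score_translation(human_ratings))    # 5.0
```

A weighted average (for instance, counting cultural idioms more heavily) would be a natural refinement, since idiom handling is exactly where the two translators diverged.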
By the end of our testing, it was clear that, while Google’s app was very good at translating the gist of both the document and recorded speech, it was unable to effectively translate the nuances. It tended to translate extremely literally instead of interpreting idioms as idioms. One example of this was “Lets get high demand in foreign markets and provides services,” which is, at the very least, not grammatically correct.
Our human translator, on the other hand, was able to preserve the original connotations and intent. Her version of the example above was “it allows for creating high demand products in foreign markets and provides services,” which flows much more smoothly.
Clearly, humans are still superior to machines… at least as far as translation is concerned.