The goal of a universal translator may not be that far off, if Microsoft Research has anything to say about it. The company has been working with the University of Toronto on improving both text and spoken-word translation of languages. Recently, Microsoft showed off its efforts to a Chinese audience.
A new post on the Next at Microsoft blog, written by Microsoft's Chief Research Officer Rick Rashid, describes the company's efforts to improve translation and speech recognition. Rashid says that even the best systems still generate errors up to 25 percent of the time. Microsoft Research has now come up with a new technique inspired by the behavior of the human brain.
Rashid states that the use of this new technique, called Deep Neural Networks, allows them "to reduce the word error rate for speech by over 30% compared to previous methods. This means that rather than having one word in 4 or 5 incorrect, now the error rate is one word in 7 or 8."
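The arithmetic behind that quote can be checked with a quick sketch. The 1-in-4 and 1-in-5 baselines come straight from the quote; the sketch applies an exactly 30% relative reduction to each (hypothetical round number, since Rashid says only "over 30%"):

```python
# Check the relative error-rate arithmetic in Rashid's quote.
# Baseline: one word in 4 or 5 wrong (a 25% or 20% word error rate).
# A 30% relative reduction multiplies the error rate by 0.7.
for words_per_error in (4, 5):
    baseline = 1 / words_per_error   # e.g. 0.25 for 1 word in 4
    improved = baseline * 0.7        # exactly-30% relative reduction
    print(f"1 in {words_per_error} -> {improved:.1%} "
          f"(about 1 word in {1 / improved:.1f})")
```

An exact 30% reduction lands around 1 word in 5.7 to 7.1, so the quoted "one word in 7 or 8" implies the actual improvement was somewhat larger than 30%, consistent with Rashid's "over 30%" wording.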
The blog also has a video in which Rashid speaks to a Chinese audience while his English speech is translated on the fly into Chinese. The translation appears first as text, and then his spoken words are rendered as synthesized Chinese speech. He says, "It required a text to speech system that Microsoft researchers built using a few hours speech of a native Chinese speaker and properties of my own voice taken from about one hour of pre-recorded (English) data, in this case recordings of previous speeches I’d made."
The upshot of all this work? Rashid says flat out, "In other words, we may not have to wait until the 22nd century for a usable equivalent of Star Trek’s universal translator, and we can also hope that as barriers to understanding language are removed, barriers to understanding each other might also be removed."
Source: Next at Microsoft