In the last few years there have been massive improvements in machine-translation software. The promise of replacing interpreters and translators with machines is held out as a real possibility in businesses and institutions. Is it just hype, or is it genuinely on the horizon?
Machine translation works on one or both of the following key principles:
- A rules engine, usually combined with fuzzy-logic learning or a neural network
- Statistical analysis of a large corpus of trusted material (for example, Google Translate)
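The statistical approach can be illustrated with a toy sketch: count word co-occurrences across a tiny parallel corpus, then translate by picking each word's most frequent pairing. The corpus, the alignment scheme, and the `translate` function here are all made up for illustration; real systems align phrases over millions of sentence pairs.

```python
from collections import Counter, defaultdict

# A hypothetical four-sentence parallel corpus (English -> French),
# standing in for the "large corpus of trusted material".
corpus = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
    ("a dog", "un chien"),
]

# Count how often each source word co-occurs with each target word
# in aligned sentence pairs (a crude stand-in for word alignment).
counts = defaultdict(Counter)
for src, tgt in corpus:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1

def translate(sentence):
    # Pick the most frequently co-occurring target word for each
    # source word -- statistics, with no grammar rules at all.
    return " ".join(counts[w].most_common(1)[0][0] for w in sentence.split())
```

Even this crude counter gets `translate("the dog")` right from four examples, which hints at why the statistical approach scales so well; it also hints at the weakness, since any word outside the corpus simply has no translation.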
The idea of machine-based interpretation extends this with the addition of:
- Natural language processing
- Voice recognition
- Text-to-speech conversion
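These three stages compose into a pipeline: speech in, recognised text, translated text, speech out. A minimal sketch of that composition, with every stage stubbed out (the function names, the one-entry lookup, and the fake audio format are my own placeholders, not real APIs):

```python
def recognise_speech(audio):
    # Placeholder for voice recognition: pretend the audio
    # has already been transcribed.
    return audio["transcript"]

def translate_text(text, target_lang):
    # Placeholder for machine translation: a one-entry lookup
    # standing in for a real MT engine.
    lookup = {("hello", "fr"): "bonjour"}
    return lookup.get((text, target_lang), text)

def synthesise_speech(text):
    # Placeholder for text-to-speech: a real system would
    # return audio samples, not a tagged string.
    return f"<audio:{text}>"

def interpret(audio, target_lang):
    text = recognise_speech(audio)                  # voice recognition
    translated = translate_text(text, target_lang)  # machine translation
    return synthesise_speech(translated)            # text-to-speech
```

The point of the sketch is the chaining: an error in any one stage propagates into the next, which is part of why precision-critical interpretation is so much harder than translation alone.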
I expect huge improvements in each of these areas, but I suspect that replacing people in the workflow will never be achieved in scenarios that demand precision and accuracy. Why? Because language is defined neither by rules nor by statistics. It is constantly changing, and each person uses it differently. The rules change, and yesterday's statistics do not apply today. Does that mean these areas of research and development are useless? Not at all. It just means that there needs to be some realism in their application.
It reminds me of previous sagas in human history: the claim that religion would be totally wiped out as reason took its place; the claim that science would cure all ills (followed by disillusionment after the nuclear bomb). Technology is seen as the solution to the world's issues, improving everyone's quality of life and solving every single ill perfectly. Google does not sound so unreasonable when it states that it hopes to organise all the world's information within 300 years (http://news.cnet.com/8301-10784_3-5770305-7.html).
What do you think?