How Big Tech is Using Neural Machine Translation to Bridge Borders


As machine learning (ML) and artificial intelligence (AI) technology develops, one industry feeling a big impact is translation. Recently, three of the major players in the tech world announced that they had reached major milestones in the field, or were well on their way to doing so.

Facebook announced that it was shifting to a new neural network system for translation, and Microsoft laid the groundwork for its own translator hub, expected to be built on AI and deep learning. The most impressive of these announcements was Google's Neural Machine Translation system (GNMT), which introduced a completely new approach to machine translation.

What is NMT?

Most machine translation services today translate a phrase or sentence word by word. This often produces less accurate translations, because a word-by-word system easily misses contextual meaning.
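The limitation is easy to demonstrate. Here is a toy sketch of word-by-word translation, with an invented English-to-French dictionary; notice that even when every individual word maps correctly, word order and context are lost:

```python
# Toy word-by-word translator. The dictionary and examples are
# invented for illustration, not taken from any real system.
word_dict = {
    "the": "le",
    "black": "noir",
    "cat": "chat",
}

def word_by_word(sentence):
    # Translate each word independently; context is ignored entirely.
    return " ".join(word_dict.get(w, w) for w in sentence.split())

print(word_by_word("the black cat"))
# Produces "le noir chat" -- but correct French is "le chat noir".
# No per-word lookup can fix ordering, agreement, or ambiguity.
```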

Neural Machine Translation, on the other hand, trains one large neural network to optimize translation performance, using neural network models to learn a statistical model of translation directly from data. Because the system is trained end to end as a single unit, it produces one coherent output rather than assembling a translation from separately translated pieces.

To understand what this means, we must first understand how a neural network operates. An artificial neural network is made up of units: input, hidden, and output. Each unit is connected to those around it, with stronger connections carrying more 'weight'. The input units pass their values to the hidden units, whose activations are then combined to produce the output.
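The input-hidden-output structure can be sketched in a few lines of NumPy. The layer sizes, weight values, and activation function below are arbitrary choices for illustration:

```python
import numpy as np

# Minimal sketch of a network with input, hidden, and output units.
# Sizes and weights are arbitrary, for illustration only.
rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(4, 3))  # connections: 3 inputs -> 4 hidden units
W_output = rng.normal(size=(2, 4))  # connections: 4 hidden -> 2 output units

def forward(x):
    # Each hidden unit sums its weighted inputs and applies a nonlinearity;
    # stronger connections (larger weights) contribute more to the sum.
    hidden = np.tanh(W_hidden @ x)
    # The output units combine the hidden activations the same way.
    return W_output @ hidden

y = forward(np.array([1.0, 0.5, -0.2]))
print(y.shape)  # (2,)
```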

Neural networks learn by comparing the output they produce with the output they were supposed to generate. The difference between the two is calculated, and the connection weights are adjusted to reduce it. A translation network trained this way also develops contextual awareness, allowing it to adapt its word choices to the surrounding text.
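That compare-and-adjust loop is gradient descent. As a minimal sketch, here is a single linear unit whose weights are nudged repeatedly to shrink the gap between its output and a target value (the numbers are made up for the example):

```python
import numpy as np

# Learning by comparing output to target: each step measures the
# difference and shifts the weights to reduce it. Values are illustrative.
w = np.array([0.0, 0.0])      # connection weights, initially zero
x = np.array([1.0, 2.0])      # one training input
target = 3.0                  # what the unit *should* produce
lr = 0.1                      # learning rate (step size)

for _ in range(50):
    output = w @ x            # what the unit actually produces
    error = output - target   # compare with what it should generate
    w -= lr * error * x       # adjust weights to reduce the difference

print(round(float(w @ x), 3))  # -> 3.0
```

After enough iterations the output converges on the target; real networks apply the same idea across millions of weights via backpropagation.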

Collaboration is the Key to Moving Forward

For NMT to be transformed into a highly optimized platform, companies need to be willing to collaborate. It isn’t feasible to expect that one tech giant would be able to create a perfect system on its own.

Freedom of information and the sharing of ideas will lead to a more successful and well-rounded system. Although Google holds the patent for how its own NMT was implemented, there are still hundreds, if not thousands, of developments that need to occur for the technology to work properly and become fully accurate.

Fortunately, this sharing of information has become easier in the last year. Facebook started off by open-sourcing its deep learning modules for Torch, as well as the design of its Big Sur AI server. Google later provided open access to its TensorFlow ML library, and Amazon followed suit by making its deep learning software DSSTNE available to the public.

Hardware advancements will also help to push this process along. In May 2017, a startup called Nervana announced that it, along with Google, was working on building a customized processor for these neural networks and ML applications.

These advancements could let NMT programs run as much as 10X more efficiently than the GPUs (graphics processing units) currently used for neural network computations.

NMT and Humans Working Together

As NMT brings machine translation into the future, human translators will gain an invaluable tool for providing the highest quality translations as demand continues to grow. With our ever-globalizing world getting smaller and smaller, reliable translation services will prove essential for customers who interact with one another from all corners of the globe.

