Joint Language and Translation Modeling with Recurrent Neural Networks
- Michael Auli,
- Michel Galley,
- Chris Quirk,
- Geoffrey Zweig
Proc. of EMNLP 2013
We present a joint language and translation model based on a recurrent neural network that predicts target words from an unbounded history of both source and target words. The weaker independence assumptions of this model result in a vastly larger search space than related feed-forward language or translation models. We tackle this issue with a new lattice rescoring algorithm and demonstrate its effectiveness empirically. Our joint model builds on a well-known recurrent neural network language model (Mikolov, 2012), augmented by a layer of additional inputs from the source language. We show competitive accuracy compared to the traditional channel model features. Our best results improve the output of a system trained on WMT 2012 French-English data by up to 1.5 BLEU, and by 1.1 BLEU on average across several test sets.
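To make the described architecture concrete, below is a minimal NumPy sketch of an Elman-style recurrent language model whose input layer is augmented with a source-side context vector, in the spirit of the joint model the abstract outlines. The class name, dimensions, and the bag-of-words encoding of the source context are illustrative assumptions, not the authors' implementation; in particular, the paper conditions on source words via alignments and applies the model through lattice rescoring, neither of which is reproduced here.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the authors' code) of an
# RNN language model whose input is augmented with a source-context vector.
class JointRNNLM:
    def __init__(self, target_vocab, source_vocab, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.V_t, self.V_s, self.H = target_vocab, source_vocab, hidden
        # Input-to-hidden weights for the previous target word (one-hot).
        self.U = rng.normal(0.0, 0.1, (hidden, target_vocab))
        # Extra input-to-hidden weights for the source-context vector.
        self.F = rng.normal(0.0, 0.1, (hidden, source_vocab))
        # Recurrent hidden-to-hidden weights (carry the unbounded history).
        self.W = rng.normal(0.0, 0.1, (hidden, hidden))
        # Hidden-to-output weights over the target vocabulary.
        self.V = rng.normal(0.0, 0.1, (target_vocab, hidden))

    def step(self, h_prev, prev_word_id, source_bag):
        """One time step: combine the previous target word, the source
        context, and the recurrent state; return (new state, P(next word))."""
        x = np.zeros(self.V_t)
        x[prev_word_id] = 1.0
        h = np.tanh(self.U @ x + self.F @ source_bag + self.W @ h_prev)
        logits = self.V @ h
        probs = np.exp(logits - logits.max())
        return h, probs / probs.sum()

    def sentence_logprob(self, target_ids, source_bag):
        """Log-probability of a target word sequence given a fixed
        source-context vector (here, a bag of source words)."""
        h = np.zeros(self.H)
        logp = 0.0
        for prev, nxt in zip(target_ids[:-1], target_ids[1:]):
            h, p = self.step(h, prev, source_bag)
            logp += np.log(p[nxt])
        return logp


# Toy usage: score a 4-word target hypothesis against a 3-word source bag.
model = JointRNNLM(target_vocab=10, source_vocab=8)
src = np.zeros(8)
src[[1, 3, 5]] = 1.0          # hypothetical bag of aligned source words
print(model.sentence_logprob([0, 4, 7, 2], src))
```

The key point the sketch illustrates is that the hidden state depends on the entire preceding target history through the recurrent weights, while the source-side inputs inject translation context at every time step, which is what distinguishes this joint model from a feed-forward model with a fixed context window.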