Recurrent Neural Networks for Language Processing

Established: November 23, 2012

This project focuses on advancing the state of the art in language processing with recurrent neural networks. We are currently applying these networks to language modeling, machine translation, speech recognition, language understanding, and meaning representation. A special interest is in adding side-channels of information as input, to model phenomena that are not easily handled in other frameworks.
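The core idea behind the side-channel conditioning is to feed an extra feature vector (for example, a topic or document descriptor) into the recurrent hidden-layer update alongside the current word. The sketch below illustrates this in NumPy; the names, dimensions, and random weights are illustrative assumptions, not the toolkit's actual code.

    # A minimal sketch of a context-dependent RNN language model: the hidden
    # state is updated from the current word AND a side-channel vector f.
    # All sizes and parameter names here are assumed for illustration.
    import numpy as np

    V, H, F = 1000, 64, 16          # vocabulary, hidden, feature sizes (assumed)
    rng = np.random.default_rng(0)
    U = rng.normal(0, 0.1, (H, V))  # one-hot word -> hidden
    W = rng.normal(0, 0.1, (H, H))  # hidden -> hidden recurrence
    G = rng.normal(0, 0.1, (H, F))  # side-channel feature -> hidden
    O = rng.normal(0, 0.1, (V, H))  # hidden -> next-word logits

    def step(h, word_id, f):
        """One RNN step; the side-channel f enters the hidden update directly."""
        x = np.zeros(V); x[word_id] = 1.0
        h = np.tanh(U @ x + W @ h + G @ f)
        logits = O @ h
        p = np.exp(logits - logits.max())   # softmax over the next word
        return h, p / p.sum()

    h = np.zeros(H)
    f = rng.normal(0, 1, F)                 # e.g., a topic vector for the document
    for w in (3, 17, 42):                   # a toy word-id sequence
        h, p = step(h, w, f)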

A toolkit for RNN language modeling with side-information is included in the associated download. Sample word vectors for use with this toolkit, along with training and test scripts, can be found in the sample_vectors directory (be sure to unzip). These are for Penn Treebank words and achieve a perplexity of 128; removing the context dependence raises the perplexity to 144.
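For reference, perplexity is the exponential of the average per-word negative log-likelihood on the test set. A minimal sketch of the computation (the function and its inputs are assumptions for illustration, not toolkit output):

    # Perplexity from the probabilities a model assigns to each test-set word:
    # exp of the mean negative log-likelihood (standard definition).
    import math

    def perplexity(word_probs):
        nll = -sum(math.log(p) for p in word_probs) / len(word_probs)
        return math.exp(nll)

    print(perplexity([0.1, 0.05, 0.2]))  # toy example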

As described in the NAACL-2013 paper “Linguistic Regularities in Continuous Space Word Representations,” we have found that the word representations capture many linguistic regularities. A data set for quantifying the degree to which syntactic regularities are modeled can be found in the test_set directory of the download.
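The paper evaluates these regularities with the vector-offset method: an analogy "a is to b as c is to d" is answered by finding the vocabulary word whose vector is closest, by cosine similarity, to b - a + c. A minimal sketch under that definition follows; the dictionary interface and toy usage are assumptions, not the released sample_vectors format.

    # Vector-offset analogy answering, as described in the NAACL-2013 paper:
    # return the word (other than a, b, c) whose vector is most cosine-similar
    # to b - a + c.
    import numpy as np

    def analogy(vectors, a, b, c):
        """vectors: dict mapping word -> NumPy vector (assumed interface)."""
        target = vectors[b] - vectors[a] + vectors[c]
        target /= np.linalg.norm(target)
        best, best_sim = None, -1.0
        for word, v in vectors.items():
            if word in (a, b, c):
                continue
            sim = float(target @ (v / np.linalg.norm(v)))
            if sim > best_sim:
                best, best_sim = word, sim
        return best

    # e.g., analogy(vecs, "king", "kings", "child") should return "children"
    # for embeddings that capture the plural regularity.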

People

Chris Quirk

Partner Researcher

Michel Galley

Senior Principal Researcher