Multi-task Learning for Natural Language Generation in Task-Oriented Dialogue
- Chenguang Zhu,
- Michael Zeng,
- Xuedong Huang
Empirical Methods in Natural Language Processing (EMNLP) | Organized by ACL
In task-oriented dialogues, Natural Language Generation (NLG) is the final and crucial step of producing user-facing system utterances. The result of NLG directly affects the perceived quality and usability of a dialogue system. While most existing systems produce semantically correct responses given the goals to present, they struggle to match the variation and fluency of human language. In this paper, we propose a novel multi-task learning framework, NLG-LM, for natural language generation. In addition to generating high-quality responses that convey the required information, it also explicitly targets naturalness in the generated responses via an unconditioned language model. This can significantly improve the learning of style and variation in human language. Empirical results show that this multi-task learning framework outperforms previous models across multiple datasets. For example, it improves the previous best BLEU score on the E2E-NLG dataset by 2.2%, and on the Laptop dataset by 6.1%.
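The core idea, a conditioned response generator and an unconditioned language model sharing one decoder, can be sketched in a few lines. Below is a minimal PyTorch sketch of such a multi-task setup, not the authors' implementation; the GRU seq2seq architecture, the module names, and the loss weight `lm_weight` are all illustrative assumptions:

```python
# Minimal sketch of the multi-task idea: one decoder is trained both
# (a) conditioned on the encoded dialogue act, to generate the response,
# and (b) unconditioned, as a plain language model over the same response.
# Architecture, sizes, and lm_weight are assumptions, not the paper's values.
import torch
import torch.nn as nn

class NLGLM(nn.Module):
    def __init__(self, vocab_size, hidden=256, lm_weight=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)  # shared by both tasks
        self.out = nn.Linear(hidden, vocab_size)
        self.lm_weight = lm_weight
        self.loss = nn.CrossEntropyLoss()

    def decode(self, response, h0):
        # Teacher-forced decoding; h0 is the encoder state (NLG task)
        # or zeros (unconditioned LM task).
        hidden_states, _ = self.decoder(self.embed(response[:, :-1]), h0)
        return self.out(hidden_states)

    def forward(self, dialogue_act, response):
        # NLG task: condition the decoder on the encoded dialogue act.
        _, h_enc = self.encoder(self.embed(dialogue_act))
        nlg_logits = self.decode(response, h_enc)
        # LM task: same decoder, no conditioning (zero initial state).
        lm_logits = self.decode(response, torch.zeros_like(h_enc))
        target = response[:, 1:].reshape(-1)
        nlg_loss = self.loss(nlg_logits.reshape(-1, nlg_logits.size(-1)), target)
        lm_loss = self.loss(lm_logits.reshape(-1, lm_logits.size(-1)), target)
        # Joint objective: semantic correctness plus naturalness.
        return nlg_loss + self.lm_weight * lm_loss
```

Because the LM loss is computed over the response text alone, the shared decoder is pushed toward fluent, natural phrasing independently of the semantic input, which is the intuition behind the reported gains in style and variation.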