Hybrid Code Networks: Practical and Efficient End-To-End Dialog Control With Supervised and Reinforcement Learning

  • Jason Williams
  • Kavosh Asadi
  • Geoffrey Zweig

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)

Published by the Association for Computational Linguistics

End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors. We introduce Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates. Compared to existing end-to-end approaches, HCNs considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state. In addition, HCNs can be optimized with supervised learning, reinforcement learning, or a mixture of both. HCNs attain state-of-the-art performance on the bAbI dialog dataset, and outperform two commercially deployed customer-facing dialog systems.
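To make the architecture described in the abstract concrete, the sketch below illustrates one way an RNN can be combined with domain-specific code and hand-written action templates: the code supplies features and an action mask each turn, and the RNN chooses among the templates. This is only an illustrative sketch assuming PyTorch, not the authors' implementation; names such as `DOMAIN_ACTIONS`, `extract_features`, and `action_mask` are hypothetical.

```python
# Illustrative sketch of the Hybrid Code Network idea (not the authors' code):
# an RNN selects among hand-written action templates, while domain-specific
# Python code supplies input features and an action mask at each turn.

import torch
import torch.nn as nn

DOMAIN_ACTIONS = [                      # hand-written action templates (hypothetical)
    "api_call place_order",
    "Which size would you like?",
    "Anything else?",
]

def extract_features(utterance, context):
    """Domain-specific code: bag-of-words plus simple context flags."""
    vocab = ["order", "pizza", "large", "small", "yes", "no"]
    bow = [float(w in utterance.lower().split()) for w in vocab]
    flags = [float(context.get("size_known", False))]
    return torch.tensor(bow + flags)

def action_mask(context):
    """Domain-specific code: disallow the API call until the size is known."""
    mask = torch.ones(len(DOMAIN_ACTIONS))
    if not context.get("size_known", False):
        mask[0] = 0.0
    return mask

class HCNSketch(nn.Module):
    def __init__(self, feat_dim, hidden_dim, n_actions):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_actions)

    def forward(self, feats, mask, state=None):
        # feats: (1, 1, feat_dim) -- one dialog, processed one turn at a time
        h, state = self.rnn(feats, state)
        logits = self.out(h[:, -1])
        probs = torch.softmax(logits, dim=-1) * mask   # apply the action mask
        probs = probs / probs.sum()                    # renormalize
        return probs, state

# One turn of inference; supervised or reinforcement learning would update
# the RNN and output weights from labeled dialogs or task rewards.
model = HCNSketch(feat_dim=7, hidden_dim=16, n_actions=len(DOMAIN_ACTIONS))
ctx = {"size_known": False}
feats = extract_features("I want to order a pizza", ctx).view(1, 1, -1)
probs, state = model(feats, action_mask(ctx))
print(DOMAIN_ACTIONS[int(probs.argmax())])
```

In this sketch, the action mask and feature code are where domain knowledge enters, which is what lets the RNN's latent dialog state be learned from comparatively few dialogs.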