Constrained Language Models Yield Few-Shot Semantic Parsers
- Richard Shin
- Christopher H. Lin
- Sam Thomson
- Charles Chen
- Subhro Roy
- Emmanouil Antonios Platanios
- Adam Pauls
- Dan Klein
- Jason Eisner
- Ben Van Durme
EMNLP 2021
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. With a small amount of data and very little code to convert into English-like representations, we provide a blueprint for rapidly bootstrapping semantic parsers and demonstrate good performance on multiple tasks.
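To make the idea concrete, here is a minimal, hypothetical Python sketch of the two ingredients the abstract describes: generation constrained to a small grammar of canonical, English-like utterances, and a short deterministic mapping from those utterances to a formal meaning representation. The grammar, the toy scoring function standing in for a pretrained language model, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# A self-contained sketch, assuming a toy grammar and a stand-in LM scorer.
# It is NOT the paper's code; it only illustrates the general recipe:
# (1) constrain decoding so the output is always a canonical utterance,
# (2) map that utterance to a meaning representation with very little code.

from typing import Dict, List, Tuple

# Hypothetical grammar over canonical utterances: each state lists the phrases
# that may come next and the state they lead to. A real system would derive
# this from the target formal language.
CANONICAL_GRAMMAR: Dict[str, List[Tuple[str, str]]] = {
    "START": [("create an event", "EVENT"), ("find a person", "PERSON_DONE")],
    "EVENT": [("called", "EVENT_NAME")],
    "EVENT_NAME": [("team sync", "EVENT_DONE"), ("lunch", "EVENT_DONE")],
}


def toy_lm_score(request: str, prefix: str, candidate: str) -> float:
    """Stand-in for a pretrained LM scoring `candidate` as the next phrase,
    given the user `request` and the canonical `prefix` generated so far.
    Here: simple word overlap with the request."""
    return len(set(request.lower().split()) & set(candidate.lower().split()))


def constrained_decode(request: str) -> str:
    """Greedy decoding, but only over continuations the grammar allows,
    so the output is guaranteed to be a well-formed canonical utterance."""
    state, utterance = "START", []
    while state in CANONICAL_GRAMMAR:
        options = CANONICAL_GRAMMAR[state]
        phrase, state = max(
            options, key=lambda o: toy_lm_score(request, " ".join(utterance), o[0])
        )
        utterance.append(phrase)
    return " ".join(utterance)


def to_meaning_representation(canonical: str) -> str:
    """The 'very little code' step: a deterministic mapping from the
    controlled sublanguage to a formal meaning representation."""
    if canonical.startswith("create an event called "):
        name = canonical[len("create an event called "):]
        return f'(CreateEvent :name "{name}")'
    if canonical == "find a person":
        return "(FindPerson)"
    raise ValueError(f"not a canonical utterance: {canonical}")


if __name__ == "__main__":
    canonical = constrained_decode("create a team sync event")
    print(canonical)                              # create an event called team sync
    print(to_meaning_representation(canonical))   # (CreateEvent :name "team sync")
```

In this sketch the language model only ranks grammar-licensed continuations; the mapping to the meaning representation is trivially rule-based because the sublanguage is controlled, which is what keeps the hand-written conversion code small.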