Benchmarking Meaning Representations in Neural Semantic Parsing
- Jiaqi GUO,
- Qian LIU,
- Jian-Guang LOU,
- Zhenwen LI,
- Susan Xueqing LIU,
- Tao XIE,
- Ting LIU
EMNLP'20 (full paper)
Meaning representation is an important component of semantic parsing. Although researchers have designed a wide variety of meaning representations, recently proposed approaches are evaluated on only a few of them, and the impact of meaning representations on semantic parsing performance has not been systematically studied. In addition, existing approaches are usually evaluated solely with the exact-match metric (partly due to the lack of a ready-to-use execution engine), which underestimates their performance. In this work, we argue that it is important to create a benchmark that allows researchers to evaluate their approaches more comprehensively and to compare them more fairly with previous work. To this end, we provide a unified benchmark that covers three domains and includes four different meaning representations, along with their annotated logical forms and execution engines. We conduct a thorough experimental study on the benchmark to reveal the impact of different meaning representations on semantic parsing performance. By open-sourcing our benchmark and the source code of our study, we believe our work can provide fertile soil for exploring meaning representations in semantic parsing.
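To make the evaluation-metric point concrete, below is a minimal, hypothetical sketch (not taken from the benchmark's released code) contrasting exact-match with execution-based evaluation on SQL-style logical forms, one common meaning representation. The function names, queries, and database path are illustrative assumptions: two predictions that differ as strings can still return identical results when executed, which is exactly the case exact-match alone misses.

```python
# Sketch only: contrasts exact-match with execution-based evaluation.
# All names, queries, and the database path are hypothetical examples.
import sqlite3

def exact_match(predicted: str, gold: str) -> bool:
    """Naive string comparison after whitespace normalization."""
    return " ".join(predicted.split()).lower() == " ".join(gold.split()).lower()

def execution_match(predicted: str, gold: str, db_path: str) -> bool:
    """Compare the result sets produced by executing both queries."""
    with sqlite3.connect(db_path) as conn:
        pred_rows = conn.execute(predicted).fetchall()
        gold_rows = conn.execute(gold).fetchall()
    return sorted(pred_rows) == sorted(gold_rows)

if __name__ == "__main__":
    # The predicted query qualifies columns with the table name, so exact
    # match fails, yet both queries return the same rows when executed.
    gold = "SELECT name FROM city WHERE population > 1000000"
    pred = "SELECT city.name FROM city WHERE city.population > 1000000"
    print(exact_match(pred, gold))                       # False
    # print(execution_match(pred, gold, "geo.sqlite"))   # True on a matching DB
```

The sketch also illustrates why a ready-to-use execution engine matters: without one, only the string-level comparison is available, and semantically correct but syntactically different predictions are counted as errors.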