{"id":495725,"date":"2018-07-18T14:24:30","date_gmt":"2018-07-18T21:24:30","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=495725"},"modified":"2018-10-16T22:22:51","modified_gmt":"2018-10-17T05:22:51","slug":"natural-language-to-structured-query-generation-via-meta-learning","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/natural-language-to-structured-query-generation-via-meta-learning\/","title":{"rendered":"Natural Language to Structured Query Generation via Meta-Learning"},"content":{"rendered":"
In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.
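To make the protocol concrete, below is a minimal sketch of the pseudo-task idea: for each query, a relevance function retrieves a small support set from the training data, and the model takes a quick adaptation step on that support set before predicting. This is an illustrative toy in PyTorch, not the paper's implementation; the `relevance` and `adapt` helpers, the cosine-similarity retrieval, and the linear toy model are all hypothetical stand-ins (the paper uses a domain-dependent relevance function and a full semantic-parsing model).

```python
# Sketch of the few-shot pseudo-task protocol, under the assumptions above.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def relevance(query_vec, train_vecs, k=4):
    """Hypothetical relevance function: retrieve the k training examples
    most similar to the query (cosine similarity stands in for the
    paper's domain-dependent relevance function)."""
    sims = F.cosine_similarity(query_vec.unsqueeze(0), train_vecs, dim=1)
    return sims.topk(k).indices

def adapt(model, support_x, support_y, inner_lr=0.1, steps=1):
    """One-step adaptation on the retrieved pseudo-task: a first-order,
    MAML-style inner update applied to a copy of the base model."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(adapted(support_x), support_y)
        loss.backward()
        opt.step()
    return adapted

# Toy data: each incoming example becomes its own pseudo-task.
torch.manual_seed(0)
train_x, train_y = torch.randn(100, 16), torch.randint(0, 3, (100,))
model = nn.Linear(16, 3)  # stand-in for the NL-to-SQL model

query = torch.randn(16)
support_idx = relevance(query, train_x)            # build the pseudo-task
adapted = adapt(model, train_x[support_idx], train_y[support_idx])
pred = adapted(query.unsqueeze(0)).argmax(dim=1)   # predict with adapted weights
print(pred.item())
```

The meta-training outer loop is omitted here for brevity: in the full protocol the base model's weights are also updated across many such pseudo-tasks so that one inner step of adaptation becomes effective, which is what drives the faster convergence reported above.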