{"id":659091,"date":"2020-05-14T11:07:58","date_gmt":"2020-05-14T18:07:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=659091"},"modified":"2020-05-14T11:08:58","modified_gmt":"2020-05-14T18:08:58","slug":"robust-natural-language-inference-models-with-example-forgetting","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/robust-natural-language-inference-models-with-example-forgetting\/","title":{"rendered":"Robust Natural Language Inference Models with Example Forgetting"},"content":{"rendered":"
We investigate whether example forgetting, a recently introduced measure of example hardness, can be used to select training examples in order to increase the robustness of natural language understanding models on a natural language inference task (MNLI). We analyze forgetting events for MNLI and provide evidence that examples forgettable under simpler models can be used to increase the robustness of the recently proposed BERT model, measured by testing an MNLI-trained model on HANS, a curated test set whose distribution is shifted relative to the MNLI test set. Moreover, we show that the “large” version of BERT is more robust than its “base” version, but that its robustness can still be improved with our approach.
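To make the notion of forgetting events concrete, below is a minimal sketch of how they can be tracked during training: an example undergoes a forgetting event when it transitions from being correctly classified to being misclassified between consecutive checks, and "forgettable" examples are those forgotten at least once or never learned. This is an illustrative implementation of that general definition, not the paper's code; names such as `model`, `train_loader`, and `num_epochs` are assumptions, and the loader is assumed to yield per-example indices.

```python
import torch
import torch.nn.functional as F

def train_and_track_forgetting(model, train_loader, optimizer,
                               num_epochs, num_examples):
    """Train `model` while counting forgetting events per training example."""
    # prev_correct[i]: was example i classified correctly at its last check?
    prev_correct = torch.zeros(num_examples, dtype=torch.bool)
    forgetting_events = torch.zeros(num_examples, dtype=torch.long)
    ever_learned = torch.zeros(num_examples, dtype=torch.bool)

    for _ in range(num_epochs):
        for inputs, labels, idx in train_loader:  # loader yields example indices
            optimizer.zero_grad()
            logits = model(inputs)
            loss = F.cross_entropy(logits, labels)
            loss.backward()
            optimizer.step()

            correct = logits.argmax(dim=-1) == labels
            # A forgetting event: previously correct, now misclassified.
            forgot = prev_correct[idx] & ~correct
            forgetting_events[idx] += forgot.long()
            ever_learned[idx] |= correct
            prev_correct[idx] = correct

    # Forgettable examples: forgotten at least once, or never learned at all.
    forgettable = (forgetting_events > 0) | ~ever_learned
    return forgettable
```

Under the approach described above, such a forgettable mask would be computed with a simpler model and then used to select the MNLI training examples on which a stronger model such as BERT is trained.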