{"id":587071,"date":"2019-05-16T09:02:21","date_gmt":"2019-05-16T16:02:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=587071"},"modified":"2019-05-16T09:16:37","modified_gmt":"2019-05-16T16:16:37","slug":"robust-language-representation-learning-via-multi-task-knowledge-distillation","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/robust-language-representation-learning-via-multi-task-knowledge-distillation\/","title":{"rendered":"Robust Language Representation Learning via Multi-task Knowledge Distillation"},"content":{"rendered":"

Language representation learning maps symbolic natural language texts (for example, words, phrases, and sentences) to semantic vectors. Robust and universal language representations are crucial to achieving state-of-the-art results on many Natural Language Processing (NLP) tasks.

Ensemble learning is one of the most effective approaches for improving model generalization and has been used to achieve new state-of-the-art results in a wide range of natural language understanding (NLU) tasks. However, an ensemble typically consists of tens or even hundreds of different deep neural network (DNN) models and is prohibitively expensive to deploy due to the computational cost of runtime inference.

\"Figure<\/a>

Figure 1: Architecture of the MT-DNN student model.<\/p><\/div>\n

In "Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding", researchers Xiaodong Liu and Jianfeng Gao of Microsoft Research and Pengcheng He and Weizhu Chen of Microsoft Dynamics 365 AI compressed multiple ensembled models into a single Multi-Task Deep Neural Network (MT-DNN) via knowledge distillation, learning robust and universal text representations across multiple NLU tasks. The ensemble model (the teacher) was used, in an offline manner, to produce a set of soft targets for each task in the training dataset. These soft targets provide more information per training sample than the hard targets, and less variance in the gradient between training samples. For example, the sentiment of the sentence "I really enjoyed the conversation with Tom" has only a small chance of being classified as negative. But the sentence "Tom and I had an interesting conversation" can be either positive or negative, depending on its context (if available), leading to high-entropy soft targets assigned by the teacher. A single MT-DNN (the student) was then trained via multi-task learning with the help of the teachers, using both the soft targets and the correct targets across different tasks. Empirical experiments show that the distilled MT-DNN outperforms the original MT-DNN and achieves a new state-of-the-art result on the General Language Understanding Evaluation (GLUE) benchmark.
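
To make the soft-target idea concrete, here is a minimal sketch assuming a two-class sentiment task and three teachers whose class probabilities are averaged offline; the specific logits and model setup are made up purely for illustration and are not taken from the paper or its released code.

```python
import torch
import torch.nn.functional as F

# Hypothetical logits from three teachers in an ensemble for one example
# ("Tom and I had an interesting conversation"); classes = [positive, negative].
teacher_logits = torch.tensor([
    [0.6, 0.4],   # teacher 1: slightly positive
    [0.2, 0.8],   # teacher 2: leans negative
    [0.5, 0.5],   # teacher 3: undecided
])

# Soft targets: average the per-teacher class probabilities (computed offline).
soft_targets = F.softmax(teacher_logits, dim=-1).mean(dim=0)
print(soft_targets)          # roughly tensor([0.47, 0.53]) -- high entropy, ambiguous case

# Hard target: the single gold label as a one-hot vector, which carries
# no information about how plausible the other class was.
hard_target = torch.tensor([1.0, 0.0])
```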

The MT-DNN student model

The student model is an MT-DNN, a model proposed by Microsoft in 2019 that incorporates a pre-trained bidirectional Transformer language model, known as BERT, developed by Google AI. Figure 1 illustrates the architecture of the student model. The lower layers are shared across all tasks, while the top layers are task-specific. The input X (either a sentence or a set of sentences) is first represented as a sequence of embedding vectors, one for each word, in l1. Then the Transformer encoder captures the contextual information for each word and generates the shared contextual embedding vectors in l2. Finally, for each task, additional task-specific layers generate task-specific representations, followed by the operations necessary for classification, similarity scoring, or relevance ranking. The shared layers of MT-DNN are initialized with BERT and then trained jointly via multi-task learning (MTL).
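
The following PyTorch sketch illustrates this shared-encoder, per-task-head layout under some simplifying assumptions: a plain nn.TransformerEncoder stands in for the BERT-initialized shared layers, position and segment embeddings are omitted, and the class name, task names, and dimensions are hypothetical rather than taken from the released implementation.

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Illustrative stand-in for the MT-DNN layout: shared lower layers
    (lexicon encoder l1 + Transformer encoder l2) with one small head per
    task. The real student initializes the shared layers from BERT; a plain
    nn.TransformerEncoder is used here purely for readability."""

    def __init__(self, vocab_size, task_num_labels, hidden=256, num_layers=2):
        super().__init__()
        # l1: lexicon encoder -- token embeddings (position/segment embeddings omitted)
        self.embed = nn.Embedding(vocab_size, hidden)
        # l2: shared Transformer encoder producing contextual embeddings
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Task-specific output layers on top of the shared representation
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, token_ids, task):
        h = self.encoder(self.embed(token_ids))  # (batch, seq_len, hidden)
        pooled = h[:, 0]                         # first-token summary of the input
        return self.heads[task](pooled)          # task-specific logits


# Example usage with two hypothetical tasks (binary sentiment, 3-way NLI)
model = SharedEncoderMultiTask(vocab_size=30000, task_num_labels={"sst": 2, "mnli": 3})
logits = model(torch.randint(0, 30000, (4, 16)), task="sst")  # shape: (4, 2)
```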

Multi-task knowledge distillation

The process of knowledge distillation for MTL is illustrated in Figure 2. First, a set of tasks with task-specific labeled training data is selected. Then, for each task, an ensemble of different neural networks (the teacher) is trained. The teacher is used to generate a set of soft targets for each task-specific training sample. Given the soft targets of the training datasets across multiple tasks, a single MT-DNN (the student) is trained using multi-task learning, except that if task t has a teacher, the task-specific loss is the average of two objective functions: one for the correct targets and the other for the soft targets assigned by the teacher.
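
As a rough sketch of that per-task objective for a classification task, the function below averages the usual cross-entropy on the correct targets with a cross-entropy against the teacher's soft targets whenever a teacher exists; the function name and interface are hypothetical and do not come from the released code.

```python
import torch
import torch.nn.functional as F

def task_loss(student_logits, hard_labels, soft_targets=None):
    # Standard cross-entropy against the correct (hard) targets.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    if soft_targets is None:
        return hard_loss                      # task without a teacher
    # Soft cross-entropy against the teacher's targets: -sum_c q_c * log p_c.
    log_probs = F.log_softmax(student_logits, dim=-1)
    soft_loss = -(soft_targets * log_probs).sum(dim=-1).mean()
    # Average of the two objectives, as described above.
    return 0.5 * (hard_loss + soft_loss)
```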

\"Figure<\/a>

Figure 2: Knowledge distillation for multi-task learning<\/p><\/div>\n

Results

The authors compared the distilled MT-DNN with BERT and the original MT-DNN. As shown in Figure 3, MT-DNN outperforms BERT by a substantial margin, and the distilled MT-DNN realizes a further significant improvement over MT-DNN in terms of overall GLUE score. On the GLUE diagnostics data, a manually curated test set for the analysis of system performance, similar observations hold: both MT-DNN and the distilled MT-DNN outperform BERT. All of this demonstrates that the language representations learned via the distilled MT-DNN are more robust and universal than those of BERT or the original MT-DNN. Detailed results can be found on the GLUE leaderboard.

\"Figure<\/a>

Figure 3: GLUE test results<\/p><\/div>\n

Release news

Microsoft plans to release the distilled MT-DNN package to the public in June 2019 at https://github.com/namisan/mt-dnn. The release package contains the pretrained models, the source code, and a readme that describes, step by step, how to reproduce the results reported in the paper. We welcome your comments and feedback and look forward to future developments!

Acknowledgements

This research was conducted by Xiaodong Liu, Pengcheng He, Weizhu Chen and Jianfeng Gao. Additional thanks go to Asli Celikyilmaz, Xuedong Huang, Moontae Lee, Chunyuan Li, Xiujun Li, and Michael Patterson for their helpful discussions and comments.
