{"id":604773,"date":"2019-08-29T10:41:21","date_gmt":"2019-08-29T17:41:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=604773"},"modified":"2019-08-29T10:54:20","modified_gmt":"2019-08-29T17:54:20","slug":"microsoft-icecaps-an-open-source-toolkit-for-conversation-modeling","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-icecaps-an-open-source-toolkit-for-conversation-modeling\/","title":{"rendered":"Microsoft Icecaps: An open-source toolkit for conversation modeling"},"content":{"rendered":"

How we act, including how we speak, is more often than not determined by the situation we find ourselves in. We wouldn't necessarily use the same tone and language with friends during a night out bowling as we would with colleagues during an office meeting. We tailor dialogue to appropriately fit the scenario. If trained conversational agents are to continue evolving into dependable resources people can turn to for assistance, they'll need to be trained to do the same.

Today, we're excited to make available the Intelligent Conversation Engine: Code and Pre-trained Systems, or Microsoft Icecaps, a new open-source toolkit that not only allows researchers and developers to imbue their chatbots with different personas, but also to incorporate other natural language processing features that emphasize conversation modeling.

Icecaps provides an array of capabilities from recent conversation modeling literature. Several of these tools were driven by recent work done here at Microsoft Research, including personalization embeddings, maximum mutual information-based decoding, knowledge grounding, and an approach for enforcing more structure on shared feature representations to encourage more diverse and relevant responses. Our library leverages TensorFlow in a modular framework designed to make it easy for users to construct sophisticated training configurations using multi-task learning. In the coming months, we'll equip Icecaps with pre-trained conversational models that researchers and developers can either use directly out of the box or quickly adapt to new scenarios by bootstrapping their own systems.

Multi-task learning and SpaceFusion

At Icecaps' core is a flexible multi-task learning paradigm. In multi-task learning, a subset of parameters is shared among multiple tasks so those tasks can make use of shared feature representations. For example, this technique has been used in conversational modeling to combine general conversational data with unpaired utterances; by pairing a conversational model with an autoencoder that shares its decoder, one can use the unpaired data to personalize the conversational model. Icecaps enables multi-task learning by representing most models as chains of components and allowing researchers and developers to build arbitrarily complex configurations of models with shared components. Flexible multi-task training schedules are also supported, allowing users to alter how tasks are weighted over the course of training.

\"In

(opens in new tab)<\/span><\/a> In a multi-task learning environment, paired and unpaired data can be combined during training.<\/p><\/div>\n
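
To make the shared-decoder idea concrete, here is a minimal sketch of two tasks sharing one decoder, written with plain TensorFlow Keras layers. All names, shapes, and constants below are illustrative assumptions, not the Icecaps API.

```python
import tensorflow as tf

# Illustrative multi-task setup (an assumption, not the Icecaps API): a
# conversation model and an autoencoder share one decoder, so unpaired
# utterances can also shape the decoder's weights.
VOCAB, DIM = 10000, 256

embed = tf.keras.layers.Embedding(VOCAB, DIM)
conv_encoder = tf.keras.layers.GRU(DIM)   # encodes the conversational context
auto_encoder = tf.keras.layers.GRU(DIM)   # encodes an unpaired utterance
shared_decoder = tf.keras.layers.GRU(DIM, return_sequences=True)  # shared component
project = tf.keras.layers.Dense(VOCAB)

def decode(initial_state, target_tokens):
    # Teacher-forced decoding from a state produced by either encoder; since
    # the decoder is shared, gradients from both tasks update the same weights.
    hidden = shared_decoder(embed(target_tokens), initial_state=initial_state)
    return project(hidden)

def conversation_logits(context_tokens, response_tokens):
    return decode(conv_encoder(embed(context_tokens)), response_tokens)

def autoencoder_logits(utterance_tokens):
    # Unpaired data flows through the same decoder via the reconstruction task.
    return decode(auto_encoder(embed(utterance_tokens)), utterance_tokens)
```

A training schedule could then alternate batches between the two losses, adjusting their relative weight over the course of training.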

Icecaps additionally implements SpaceFusion, a specialized multi-task learning paradigm originally designed to jointly optimize for diversity and relevance of generated responses. SpaceFusion adds regularization terms to shape the latent space shared among tasks. These terms better align the distributions learned by each task over this latent space.

\"SpaceFusion

(opens in new tab)<\/span><\/a> SpaceFusion adds regularization terms to a multi-task learning environment, imposing structure upon the shared latent space to improve efficiency.<\/p><\/div>\n
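
One plausible reading of those regularizers is sketched below, under stated assumptions: z_s2s is the latent point predicted from the context by the seq2seq task, z_ae is the latent point of the gold response from the autoencoder task, and decode_nll is a hypothetical callable giving the decoder's negative log-likelihood of the gold response from a latent point. This is a sketch of the published idea, not the Icecaps implementation.

```python
import tensorflow as tf

def spacefusion_regularizers(z_s2s, z_ae, decode_nll):
    # z_s2s: latent predicted from the context by the seq2seq task   [batch, dim]
    # z_ae:  latent of the gold response from the autoencoder task   [batch, dim]
    # decode_nll(z): NLL of the gold response decoded from z (assumed callable)

    # Interpolation term: points between the paired latents should still
    # decode to the gold response, smoothing paths through the shared space.
    alpha = tf.random.uniform([tf.shape(z_s2s)[0]])[:, None]  # one alpha per example
    loss_interp = decode_nll(alpha * z_s2s + (1.0 - alpha) * z_ae)

    # Fusion term (simplified here): pull paired latents together while
    # keeping different examples in the batch spread apart.
    pair_dist = tf.reduce_mean(tf.norm(z_s2s - z_ae, axis=-1))
    spread = tf.reduce_mean(tf.norm(z_ae[:, None, :] - z_ae[None, :, :], axis=-1))
    return loss_interp + pair_dist - spread
```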

Personalization

To achieve personalization in conversational scenarios where an AI may be required to adopt some persona with its own particular style and attributes, Icecaps allows researchers and developers to train multi-persona conversation systems on multi-speaker data using personality embeddings. Personality embeddings work similarly to word embeddings; just as we learn an embedding for each word to describe how words relate to each other within a latent word space, we can learn an embedding per speaker from a multi-speaker dataset to describe a latent personality space. Multi-persona encoder-decoder models provide the decoder a personality embedding alongside word embeddings to condition the decoded response on the selected personality.

\"By

(opens in new tab)<\/span><\/a> By combining a word embedding space with a persona embedding space, personalized sequence-to-sequence models enable personalized response generation.<\/p><\/div>\n
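
As a rough sketch of this conditioning, the decoder below receives a persona vector at every step alongside the word embeddings. Layer names, shapes, and constants are illustrative assumptions, not the Icecaps API.

```python
import tensorflow as tf

VOCAB, N_SPEAKERS, DIM = 10000, 50, 256

word_embed = tf.keras.layers.Embedding(VOCAB, DIM)
persona_embed = tf.keras.layers.Embedding(N_SPEAKERS, DIM)  # one vector per speaker
decoder = tf.keras.layers.GRU(DIM, return_sequences=True)
project = tf.keras.layers.Dense(VOCAB)

def personalized_logits(context_state, response_tokens, speaker_ids):
    # Tile the persona vector across time and concatenate it with the word
    # embeddings, so every decoding step is conditioned on the chosen persona.
    words = word_embed(response_tokens)                     # [batch, time, DIM]
    persona = persona_embed(speaker_ids)[:, None, :]        # [batch, 1, DIM]
    persona = tf.tile(persona, [1, tf.shape(words)[1], 1])  # [batch, time, DIM]
    hidden = decoder(tf.concat([words, persona], axis=-1),
                     initial_state=context_state)
    return project(hidden)
```

At inference time, swapping speaker_ids selects a different persona without retraining the rest of the model.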

MMI-based decoding

Conversational systems trained on noisy real-world data tend to produce nonspecific, bland responses such as "I don't know what you're talking about." These systems learn this behavior as a safe way to consistently produce context-appropriate responses. The cost is response diversity and content. One method to tackle this issue is hypothesis reranking based on maximum mutual information (MMI). This approach trains a second model to predict the context given a potential response. This model assigns an additional score to each hypothesis generated by the base decoder, and this additional score is used to rerank the set of hypotheses. MMI takes the potential responses most targeted toward the given context and pushes them to the top of the list. Icecaps incorporates MMI-based reranking, among several other decoding features, as part of its custom beam search decoder.
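
The reranking itself reduces to combining two log-probabilities per hypothesis. Below is a minimal sketch; log_p_forward and log_p_reverse are hypothetical callables standing in for the base decoder P(response | context) and the reverse model P(context | response), and are not concrete Icecaps functions.

```python
def mmi_rerank(context, hypotheses, log_p_forward, log_p_reverse, lam=0.5):
    """Rerank beam hypotheses by forward score plus a weighted reverse score,
    pushing responses most predictive of their context to the top."""
    scored = []
    for hyp in hypotheses:
        # Forward score rewards fluent, context-appropriate responses; the
        # reverse score penalizes bland responses that fit any context.
        score = log_p_forward(hyp, context) + lam * log_p_reverse(context, hyp)
        scored.append((score, hyp))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [hyp for _, hyp in scored]
```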

Knowledge grounding

One of the major bottlenecks in training conversational systems is a lack of conversational data that captures the richness of information present in the abundance of non-conversational data that exists in the world. We therefore need good tools that can take advantage of the latter. To train an intelligent agent endowed with all the knowledge contained within Wikipedia or other encyclopedic sources, for instance, Icecaps implements an approach to knowledge-grounded conversation that combines machine reading comprehension and response generation modules. The model uses attention to isolate content from the knowledge source relevant to the context, allowing the model to produce more informed responses.

\"Cross-attention

(opens in new tab)<\/span><\/a> Cross-attention can be used to extract pertinent information from an external knowledge base for shaping generated responses.<\/p><\/div>\n
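
A minimal sketch of the attention step follows, assuming both the dialogue context and the knowledge passages have already been encoded into hidden states; the layer choice and names are illustrative, not the Icecaps machine-reading module.

```python
import tensorflow as tf

DIM = 256
cross_attention = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=DIM // 4)

def ground_in_knowledge(context_states, knowledge_states):
    # context_states:   [batch, ctx_len, DIM]  encoded dialogue context
    # knowledge_states: [batch, doc_len, DIM]  encoded knowledge passages
    # Queries come from the context; keys and values come from the knowledge,
    # so the output emphasizes knowledge spans relevant to the conversation.
    return cross_attention(query=context_states,
                           value=knowledge_states,
                           key=knowledge_states)
```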

Follow us!

Follow our GitHub page! You will receive updates as we add pre-trained systems, new natural language processing features, and tutorials. Informed personalized chatbots are only the beginning for conversational modeling; promising new areas of research include content filtering, multilingual modeling, and hybridizing conversational and task-oriented capabilities. We care about advancing the field of conversational modeling, and with Icecaps, our goal is to empower researchers and developers to push the cutting edge.
