{"id":847462,"date":"2022-05-25T01:10:22","date_gmt":"2022-05-25T08:10:22","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=847462"},"modified":"2022-12-14T11:21:42","modified_gmt":"2022-12-14T19:21:42","slug":"godel","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/godel\/","title":{"rendered":"GODEL: Large-Scale Pre-training for Goal-Directed Dialog"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"Clipart\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

GODEL: Large-Scale Pre-training for Goal-Directed Dialog

This is the home page of project GODEL (Grounded Open Dialogue Language Model), a large open-source pre-trained language model for dialog. In contrast with its predecessor DialoGPT, GODEL introduces a new phase of grounded pre-training designed to better support fine-tuning on tasks that require information external to the current conversation (e.g., a database or document) to produce good responses. Experiments on a benchmark suite combining task-oriented dialog, conversational QA, and grounded open-domain dialog show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups, in terms of both human and automatic evaluation. A novel feature of GODEL's evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses (extrinsic evaluation) in addition to their communicative features (intrinsic evaluation). We show that extrinsic evaluation offers improved inter-annotator agreement and correlation with automated metrics. More information about this work can be found in the paper “GODEL: Large-Scale Pre-training for Goal-Directed Dialog.”
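To make the grounding concrete, here is a minimal sketch of how a grounded input can be assembled before generation: an instruction, the dialog context, and optional external knowledge are flattened into a single string. The marker tokens ([CONTEXT], [KNOWLEDGE], EOS) follow the serialization used in the public GODEL release; the helper name and the example strings below are illustrative assumptions, not part of the release.

# A minimal sketch of the grounded input serialization used in the public
# GODEL release. The markers [CONTEXT], [KNOWLEDGE], and EOS follow that
# release; the helper name and example strings are illustrative assumptions.

def build_godel_query(instruction, dialog, knowledge=""):
    """Serialize instruction, dialog turns, and grounding text into one string."""
    context = " EOS ".join(dialog)                 # dialog turns separated by EOS
    query = f"{instruction} [CONTEXT] {context}"
    if knowledge:
        query += f" [KNOWLEDGE] {knowledge}"       # external document/database text
    return query

# Example: grounding a response in a (hypothetical) restaurant database entry.
query = build_godel_query(
    instruction="Instruction: given a dialog context and related knowledge, "
                "you need to respond safely based on the knowledge.",
    dialog=["Is there a cheap Italian place nearby?"],
    knowledge="Luigi's Trattoria is an inexpensive Italian restaurant on Main St.",
)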

The GODEL code and models are available on GitHub in three sizes: base, large, and extra-large. We will post information about new GODEL releases and papers on this project page.
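As a usage sketch, the released checkpoints can be loaded with the Hugging Face transformers library. The model identifier below is an assumption based on the names published alongside the release (e.g., microsoft/GODEL-v1_1-large-seq2seq); check the GitHub repository for the exact IDs and recommended generation settings.

# A hedged usage sketch, assuming the checkpoints are published on the
# Hugging Face Hub under the names used with the GODEL release; verify
# the exact model ID against the GitHub repository.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "microsoft/GODEL-v1_1-large-seq2seq"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# Grounded query in the release's serialization (see the sketch above).
query = ("Instruction: given a dialog context and related knowledge, "
         "you need to respond safely based on the knowledge. "
         "[CONTEXT] Is there a cheap Italian place nearby? "
         "[KNOWLEDGE] Luigi's Trattoria is an inexpensive Italian restaurant on Main St.")

input_ids = tokenizer(query, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=128, min_length=8,
                         top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))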

People: Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Lars Liden, Zhou Yu