{"id":775513,"date":"2021-09-01T10:28:51","date_gmt":"2021-09-01T17:28:51","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=775513"},"modified":"2021-10-23T17:43:22","modified_gmt":"2021-10-24T00:43:22","slug":"learning-commonsense-understanding-through-language-and-vision","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/learning-commonsense-understanding-through-language-and-vision\/","title":{"rendered":"Learning Commonsense Understanding through Language and Vision"},"content":{"rendered":"

As humans, we parse language and visual scenes, often together, into a rich understanding of what is going on in the world. Given even a still image or a sentence describing an event, like “my friends eating at a restaurant”, we can infer who is doing what, where they are, and what might happen next. Though existing models seem strong at tasks involving either language or vision, they often struggle to combine the two modalities into such a unified commonsense understanding.

In this talk, I will cover two recent works that seek to help bridge this gap. First, I’ll introduce PIGLeT, a model that learns physical commonsense understanding by interacting with the world in simulation and uses this knowledge to ground language. PIGLeT learns linguistic form and meaning together, and it outperforms text-to-text-only models that are orders of magnitude larger. I’ll then introduce MERLOT, a model that learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech. MERLOT learns the layered inferences that go beyond recognition of individual scenes, toward cognition-level reasoning about what is happening globally over time; it achieves state-of-the-art performance on 12 vision-and-language datasets.

I’ll conclude with a sketch of future directions for how we can better learn multimodal commonsense understanding, as well as a discussion of the social impacts of this work.

View slides

Watch the talk: https://youtu.be/QYoG-usbD_0