{"id":644109,"date":"2020-03-17T15:49:37","date_gmt":"2020-03-17T22:49:37","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=644109"},"modified":"2023-07-13T14:34:56","modified_gmt":"2023-07-13T21:34:56","slug":"vqa-introspect","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/vqa-introspect\/","title":{"rendered":"VQA Introspect"},"content":{"rendered":"

\"\"<\/p>\n

Existing VQA datasets contain questions with varying levels of complexity. While the majority of questions in these datasets require perception (recognizing the existence, properties, and spatial relationships of entities), a significant portion pose reasoning tasks: questions that can only be answered by combining perception with knowledge about the world and logic. This distinction lets us detect consistency failures in existing VQA models, where a model answers the reasoning question correctly but fails the associated low-level perception questions. For example, in the figure above, models answer the complex reasoning question "Is the banana ripe enough to eat?" correctly but fail on the associated perception question "Are the bananas mostly green or yellow?", indicating that the model likely answered the reasoning question correctly but for the wrong reason.
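To make this consistency check concrete, here is a minimal sketch of how it could be computed. The model.predict interface and the field names are assumptions for illustration, not the official VQA-Introspect schema:

```python
# A minimal sketch of the consistency metric described above, assuming a
# hypothetical model.predict(image, question) -> answer interface and
# illustrative field names (not the official VQA-Introspect schema).

def consistency(model, examples):
    """Of the reasoning questions answered correctly, return the fraction
    whose perception sub-questions are also all answered correctly."""
    reasoning_correct = 0
    consistent = 0
    for ex in examples:
        if model.predict(ex["image"], ex["reasoning_q"]) != ex["reasoning_a"]:
            continue  # consistency is only scored on correct reasoning answers
        reasoning_correct += 1
        if all(
            model.predict(ex["image"], sq["question"]) == sq["answer"]
            for sq in ex["sub_questions"]
        ):
            consistent += 1
    return consistent / max(reasoning_correct, 1)
```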
We quantify the extent to which this phenomenon occurs by creating a new Reasoning split of the VQA dataset and collecting VQA-Introspect, a new dataset of 200K perception questions that serve as sub-questions covering the perceptual tasks needed to answer the complex reasoning questions in the Reasoning split. Additionally, we propose Sub-Question Importance-aware Network Tuning (SQuINT), an approach that encourages the model to attend to the same parts of the image when answering a reasoning question and its perception sub-questions. We show that SQuINT improves model consistency by about 5%, marginally improves performance on the Reasoning questions in VQA, and produces qualitatively better attention maps.
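As a rough illustration of the SQuINT idea, a training objective can combine a standard answer loss on the sub-question with a term that aligns the attention maps produced for the reasoning question and its sub-question. The model interface, the MSE alignment term, and the loss weighting below are assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

# A minimal sketch of a SQuINT-style objective, assuming a hypothetical
# vqa_model that returns (answer_logits, spatial_attention) for an
# image/question pair. The interface and the MSE alignment term are
# illustrative; the paper's exact formulation may differ.

def squint_loss(vqa_model, image, reasoning_q, sub_q, sub_answer, alpha=1.0):
    # Attention the model uses when answering the complex reasoning question.
    _, attn_reasoning = vqa_model(image, reasoning_q)

    # Answer logits and attention for the associated perception sub-question.
    sub_logits, attn_sub = vqa_model(image, sub_q)

    # Standard VQA classification loss on the sub-question's ground-truth answer.
    answer_loss = F.cross_entropy(sub_logits, sub_answer)

    # Alignment term: push the two attention maps toward each other so the
    # model "looks at" the same image regions for both questions.
    align_loss = F.mse_loss(attn_sub, attn_reasoning)

    return answer_loss + alpha * align_loss
```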

All data is available for download on MSR Open Data.
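A first look at the downloaded annotations might go as follows; the file name and JSON layout here are assumptions, so check the dataset page for the actual schema:

```python
import json

# A minimal sketch for inspecting the downloaded annotations. The file name
# ("vqa_introspect_val.json") and top-level layout are assumptions; see the
# MSR Open Data page for the actual file names and schema.
with open("vqa_introspect_val.json") as f:
    introspect = json.load(f)

# Peek at a few records to learn the schema before writing a full loader.
records = introspect.items() if isinstance(introspect, dict) else enumerate(introspect)
for key, entry in list(records)[:3]:
    print(key, entry)
```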

Contacts

Ramprasaath Ramasamy Selvaraju (rselvaraju [at] salesforce [dot] com)

Ece Kamar (eckamar [at] microsoft [dot] com)

Summary

This project rethinks how VQA models are trained by linking reasoning questions to the simpler perception sub-questions required to solve complex tasks. The work introduces VQA-Introspect, a new dataset with sub-questions that serve as consistency checks, and a learning method that leverages the dataset to improve the reasoning capabilities of current models.

People

Ramprasaath R. Selvaraju
Purva Tendulkar
Devi Parikh
Eric Horvitz
Besmira Nushi
Ece Kamar