{"id":487466,"date":"2019-01-14T09:43:36","date_gmt":"2019-01-14T17:43:36","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=487466"},"modified":"2023-03-29T19:31:38","modified_gmt":"2023-03-30T02:31:38","slug":"figureqa-dataset","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/figureqa-dataset\/","title":{"rendered":"FigureQA Dataset"},"content":{"rendered":"
Answering questions about a given image is a difficult task, requiring both an understanding of the image and the accompanying query. Microsoft Research Montreal’s <strong>FigureQA<\/strong> dataset introduces a new visual reasoning task for research, specific to graphical plots and figures. The task comes with an additional twist: all of the questions are relational, requiring the comparison of several or all elements of the underlying plot.<\/p>\n The images comprise five types of figures commonly found in analytical documents. Fifteen question types concerning quantitative attributes were selected for the dataset, in relational <strong>global<\/strong> and <strong>one-vs-one<\/strong> contexts. These include properties such as minimum and maximum values, greater-than and less-than relations, medians, curve roughness, and area under the curve (AUC). All questions in the training and validation sets have either a yes or no answer.<\/p>\n For more details concerning the task, dataset, and our experiments, please read our paper: FigureQA: An Annotated Figure Dataset for Visual Reasoning.<\/p>\n Answering questions about a given image is a difficult task, requiring both an understanding of the image and the accompanying query. Microsoft Research Montreal’s FigureQA dataset introduces a new visual reasoning task for research, specific to graphical plots and figures. 
The task comes with an additional twist: all of the questions are relational, requiring the […]<\/p>\n","protected":false},"featured_media":487478,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-487466","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[{"id":0,"name":"Highlights","content":"Click on a figure below to enlarge it and see some of its questions, answers, and bounding boxes.<\/h3>\n
\n","protected":false},"excerpt":{"rendered":"Highlights<\/h3>\r\n[row]\r\n[column class=\"l-col-6-24\"]\r\n
Details<\/h3>\r\n