{"id":487466,"date":"2019-01-14T09:43:36","date_gmt":"2019-01-14T17:43:36","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=487466"},"modified":"2023-03-29T19:31:38","modified_gmt":"2023-03-30T02:31:38","slug":"figureqa-dataset","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/figureqa-dataset\/","title":{"rendered":"FigureQA Dataset"},"content":{"rendered":"

Answering questions about a given image is a difficult task, requiring an understanding of both the image and the accompanying query. Microsoft Research Montreal’s FigureQA<\/strong> dataset introduces a new visual reasoning task for research, specific to graphical plots and figures. The task comes with an additional twist: all of the questions are relational, requiring the comparison of several or all elements of the underlying plot.<\/p>\n

Images comprise five types of figures commonly found in analytical documents. Fifteen question types were selected for the dataset, concerning quantitative attributes in relational global<\/strong> and one-vs-one<\/strong> contexts. These include properties such as minimum and maximum values, greater-than and less-than comparisons, medians, curve roughness, and area under the curve (AUC). All questions in the training and validation sets have either a yes or no answer.<\/p>\n
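Since every question has a binary yes/no answer, the annotations are straightforward to consume. The sketch below shows one way to read FigureQA-style question–answer pairs; the field names (`qa_pairs`, `image_index`, `question_string`, `answer`) and the sample questions are illustrative assumptions, not a definitive description of the official release format.

```python
# Hypothetical sketch of parsing FigureQA-style annotations.
# Field names and sample data are assumptions for illustration only.
import json

# A tiny in-memory stand-in for an annotation file such as qa_pairs.json.
sample = json.loads("""
{"qa_pairs": [
  {"image_index": 0,
   "question_string": "Is Dark Red the minimum?",
   "answer": 1},
  {"image_index": 0,
   "question_string": "Does Light Green intersect Dark Red?",
   "answer": 0}
]}
""")

def answers_as_yes_no(pairs):
    # Binary labels: 1 maps to "yes", 0 maps to "no".
    return [("yes" if p["answer"] == 1 else "no") for p in pairs]

for pair, label in zip(sample["qa_pairs"], answers_as_yes_no(sample["qa_pairs"])):
    print(f'{pair["question_string"]} -> {label}')
```

Because the answer space is binary, a model's accuracy on such pairs can be compared directly against the 50% random-guessing baseline.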

For more details concerning the task, dataset, and our experiments, please read our paper: FigureQA: An Annotated Figure Dataset for Visual Reasoning<\/span><\/a>.<\/p>\n

Click on a figure below to enlarge it and see some of its questions, answers, and bounding boxes.<\/h3>\n
