REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering

  • Yuanze Lin,
  • Yujia Xie,
  • Dongdong Chen,
  • Yichong Xu,
  • Chenguang Zhu,
  • Lu Yuan

Conference on Neural Information Processing Systems (NeurIPS) 2022


This paper revisits visual representation in knowledge-based visual question answering (VQA) and demonstrates that using regional information in a better way can significantly improve performance. While visual representation is extensively studied in traditional VQA, it is under-explored in knowledge-based VQA, even though the two tasks share a common spirit, i.e., both rely on visual input to answer the question. Specifically, we observe that in most state-of-the-art knowledge-based VQA methods: 1) visual features are extracted either from the whole image or in a sliding-window manner for retrieving knowledge, and the important relationships within/among object regions are neglected; 2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. Based on these observations, we propose a new knowledge-based VQA method, REVIVE, which tries to utilize the explicit information of object regions not only in the knowledge retrieval stage but also in the answering model. The key motivation is that object regions and their inherent relationships are important for knowledge-based VQA. We perform extensive experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance, i.e., 58.0% accuracy, surpassing the previous state-of-the-art method by a large margin (+3.6%). We also conduct detailed analysis and show the necessity of regional information in different framework components for knowledge-based VQA.
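To make the two uses of regional information concrete, the sketch below shows one plausible way to wire region features into both stages: a relation-aware region encoder whose output can serve knowledge retrieval, and an answering model that attends over region tokens alongside the question and retrieved knowledge. All module names (`RegionEncoder`, `AnswerModel`), dimensions, and the fusion scheme are illustrative assumptions for exposition, not REVIVE's actual implementation.

```python
# Illustrative sketch only: module names, dimensions, and the fusion scheme
# are assumptions for exposition, not REVIVE's actual architecture.
import torch
import torch.nn as nn


class RegionEncoder(nn.Module):
    """Encodes per-region visual features and models inter-region
    relationships with self-attention, giving a region-aware representation
    instead of a single whole-image vector."""

    def __init__(self, region_dim=2048, hidden_dim=512, num_heads=8):
        super().__init__()
        self.proj = nn.Linear(region_dim, hidden_dim)
        self.relation = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden_dim, num_heads, batch_first=True),
            num_layers=2,
        )

    def forward(self, region_feats):          # (B, num_regions, region_dim)
        x = self.proj(region_feats)           # (B, num_regions, hidden_dim)
        return self.relation(x)               # relation-aware region features


class AnswerModel(nn.Module):
    """Fuses question, retrieved knowledge, and region features. Unlike
    pipelines that drop visual input at answering time, the region tokens
    participate directly in the fusion."""

    def __init__(self, hidden_dim=512, num_heads=8, vocab_size=30522):
        super().__init__()
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden_dim, num_heads, batch_first=True),
            num_layers=2,
        )
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, question_emb, knowledge_emb, region_emb):
        # Concatenate all modalities into one token sequence and let
        # self-attention mix them before predicting the answer.
        tokens = torch.cat([question_emb, knowledge_emb, region_emb], dim=1)
        fused = self.fuse(tokens)
        return self.classifier(fused.mean(dim=1))  # (B, vocab_size)


if __name__ == "__main__":
    B, R, D = 2, 36, 2048                 # batch, regions per image, detector feature dim
    regions = torch.randn(B, R, D)
    region_emb = RegionEncoder()(regions)  # region-aware features, usable for retrieval too
    question = torch.randn(B, 16, 512)     # placeholder question embeddings
    knowledge = torch.randn(B, 32, 512)    # placeholder retrieved-knowledge embeddings
    logits = AnswerModel()(question, knowledge, region_emb)
    print(logits.shape)                    # torch.Size([2, 30522])
```

In this sketch the same region tokens feed both stages, which mirrors the paper's motivation: regional information should inform knowledge retrieval and remain visible to the final answering model, rather than being discarded after retrieval.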