{"id":696898,"date":"2020-10-14T08:02:27","date_gmt":"2020-10-14T15:02:27","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=696898"},"modified":"2020-10-14T10:44:49","modified_gmt":"2020-10-14T17:44:49","slug":"novel-object-captioning-surpasses-human-performance-on-benchmarks","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/novel-object-captioning-surpasses-human-performance-on-benchmarks\/","title":{"rendered":"Novel object captioning surpasses human performance on benchmarks"},"content":{"rendered":"\n
\"\"\/<\/figure>\n\n\n\n

Consider for a moment what it takes to visually identify and describe something to another person. Now imagine that the other person can't see the object or image, so every detail matters. How do you decide what information is important and what's not? You'll need to know exactly what everything is, where it is, and what it's doing in relation to other objects, and to note attributes like color or the position of objects in the foreground or background. This exercise shows that translating images into words is a complex task, one humans do so often and so innately that it can seem automatic, and one that requires a wide range of knowledge about many unique things.

To translate this skill into artificial intelligence (AI), we need to carefully consider and adapt models to the deep relationships between words and objects, the ways they interrelate in expected and unexpected ways, and how context, such as the environment or the pose of an object, affects the subtleties of associating and understanding new objects within categories. In AI, this means exploring new ways of training models, untethered from traditional annotation-reliant methods that require sentence-image pairs. To this end, researchers from the Microsoft Azure Cognitive Services team and Microsoft Research have created VIVO (Visual Vocabulary Pretraining), an image-captioning milestone that performs pretraining in the absence of caption annotations and achieves new state-of-the-art performance on novel object captioning.

Refining vision and language pretraining for novel object captioning

Novel object captioning (NOC) aims to generate image captions that describe novel objects not present in the caption training data. NOC can add value to a variety of applications, such as human-computer interaction and image-language understanding. However, NOC is a challenging problem: it requires a visual system that can recognize novel objects, and it needs a language model that can generate fluent sentences describing those objects.

Recently, researchers developed the novel object captioning challenge (nocaps) to evaluate NOC. In this challenge, existing computer vision techniques can be leveraged to recognize novel objects. For example, prior studies have proposed generating template sentences that are filled in with the visual concepts recognized by object detectors. However, the captioning capability is then limited by the object detection vocabulary, and pre-defined templates can hardly describe the context of objects well.

Vision and language pretraining (VLP) has proven effective for cross-modal representation learning. Prior works, such as VLP and OSCAR, train Transformer-based models on large amounts of image-sentence pairs, and the learned cross-modal representations can be fine-tuned to improve performance on image captioning. However, these approaches rely on large amounts of paired image-sentence data for pretraining. When it comes to the nocaps challenge, where no additional paired image-sentence training data is allowed, none of the prior VLP techniques are readily applicable.

This blog post introduces VIVO, developed by the Microsoft Azure Cognitive Services team and Microsoft Research, which performs pretraining in the absence of caption annotations. By breaking VLP's dependency on paired image-sentence training data, VIVO can leverage large-scale vision datasets with image-tag pairs in pretraining to learn cross-modal alignment, building a rich visual vocabulary at scale. This leads to a new captioning framework that sets new state-of-the-art performance on the nocaps benchmark and surpasses human performance for the first time.

Please check out our paper, "VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training," for more details, and gain further insight into the researchers' perspectives on how this breakthrough impacts caption generation in Azure AI and accessibility in this blog post from The AI Blog.

\"Three
Figure 1: VIVO pretraining uses paired image-tag data to learn a rich visual vocabulary where image region features and tags of the same object are aligned. Fine-tuning is conducted on paired image-sentence data that only cover a limited number of objects (in blue). During inference, our model can generalize to describe novel objects (in yellow) that are learned during VIVO pretraining.<\/figcaption><\/figure><\/div>\n\n\n\n

As shown in Figure 1, we define the visual vocabulary as a joint embedding space in which the image region features and the tags of semantically similar objects, for example "person" and "man" or "accordion" and "instrument," are mapped to feature vectors that are close to each other. Once the model is pretrained, it is fine-tuned on image-caption pairs to learn caption generation. Note that the fine-tuning dataset only covers a subset of the most common objects in the learned visual vocabulary. Nevertheless, thanks to the visual vocabulary learned in the pretraining stage, our model can still generalize to test images that contain a similar scene (like the people sitting on couches in Figure 1) with novel objects unseen in the fine-tuning dataset (like "accordion").
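To make the idea of a joint embedding space concrete, here is a minimal sketch of how region-tag similarity could be measured in such a space. It is an illustration under assumed names and dimensions (random `region_features` and `tag_embeddings`, 768-dimensional vectors), not the VIVO implementation.

```python
# Minimal sketch of a joint visual-text embedding space. The array names,
# sizes, and the 768-d dimension are illustrative assumptions, not VIVO code.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a (regions) and rows of b (tags)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Suppose a shared model maps both modalities into the same 768-d space.
rng = np.random.default_rng(0)
region_features = rng.normal(size=(10, 768))  # 10 detected image regions
tag_embeddings = rng.normal(size=(4, 768))    # e.g. "person", "man", "accordion", "instrument"

# In a well-learned visual vocabulary, a region containing an accordion would
# score highest against "accordion" and the semantically close "instrument".
similarity = cosine_similarity(region_features, tag_embeddings)  # shape (10, 4)
print(similarity.argmax(axis=1))  # best-matching tag index for each region
```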

VIVO pretraining learns to ground image regions to object tags. In fine-tuning, our model learns how to compose natural language captions. Combining the two skills achieves compositional generalization, allowing zero-shot captioning of novel objects.

\"(a)
Figure 2: The proposed training scheme. (a) In VIVO pretraining, we train a Transformer model on (image, tag) pairs for tag prediction, where it learns cross-modal representations for rich visual concepts. (b) In fine-tuning, we train the same model on limited (image, sentence) pairs to learn how to generate captions that are conditional on the image and tags. (c) During inference, given the image and detected tags, our model is applied iteratively to generate a sequence of words describing novel objects in an auto-regressive manner.<\/figcaption><\/figure><\/div>\n\n\n\n

Our training scheme consists of three main stages, as shown in Figure 2. In pretraining, we feed a multi-layer Transformer model an input consisting of the image region features and the paired tag set; a single image can have multiple tags associated with it. We then randomly mask one or more tags and ask the model to predict the masked tags, conditioned on the image region features and the remaining tags. Because tags are not ordered, we develop a Hungarian matching loss for tag prediction.
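Because the masked tags form an unordered set, the loss must first decide which prediction corresponds to which ground-truth tag before any likelihood can be computed. The sketch below illustrates one way such a Hungarian matching loss could look, using SciPy's assignment solver; the shapes, names, and exact cost definition are simplifying assumptions for illustration, not the released VIVO code.

```python
# Simplified sketch of a Hungarian matching loss for unordered tag prediction.
# Assumptions: `logits` holds tag-vocabulary scores at each masked position and
# `target_tags` is the unordered set of ground-truth tag ids.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_tag_loss(logits: np.ndarray, target_tags: np.ndarray) -> float:
    """logits: (num_masked, vocab_size); target_tags: (num_targets,) integer tag ids."""
    shifted = logits - logits.max(axis=1, keepdims=True)                     # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))  # log-softmax
    cost = -log_probs[:, target_tags]               # cost[i, j] = NLL of target tag j at position i
    row_ind, col_ind = linear_sum_assignment(cost)  # optimal one-to-one matching
    return float(cost[row_ind, col_ind].mean())     # average negative log-likelihood over matches

# Toy example: 2 masked positions, a 5-tag vocabulary, targets {1, 3} given in any order.
logits = np.array([[0.1, 2.0, 0.2, 0.1, 0.0],
                   [0.0, 0.1, 0.3, 1.8, 0.2]])
print(hungarian_tag_loss(logits, np.array([3, 1])))  # the target order does not matter
```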


After pretraining, the Transformer model is fine-tuned on a dataset where both captions and tags are available, such as the COCO dataset with its 80 object classes and caption annotations. The tags can also come from the predictions of a tagging or detection model.
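To make the setup concrete, a schematic of how one fine-tuning example might be assembled is shown below: the caption tokens, the tag tokens, and the region features share a single Transformer input. The special tokens, the toy tokenizer, and the feature sizes are hypothetical placeholders rather than the actual preprocessing pipeline.

```python
# Schematic of how one fine-tuning example might be assembled: caption tokens,
# tag tokens, and region features share a single Transformer input. The special
# tokens, the toy tokenizer, and the feature sizes are hypothetical placeholders.
import numpy as np

def tokenize(text: str) -> list:
    return text.lower().split()

caption = "a man is playing an accordion"
tags = ["person", "accordion", "couch"]
region_features = np.zeros((10, 2054))  # e.g. 10 regions, appearance + box geometry features

text_stream = ["[CLS]"] + tokenize(caption) + ["[SEP]"] + tags + ["[SEP]"]

# During fine-tuning, some caption tokens would be masked and predicted
# conditioned on the tags and the image regions in this combined input.
print(text_stream)
print("visual stream shape:", region_features.shape)
```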

In the inference stage, given the input image and the detected tags, our model generates a sequence of word tokens in an auto-regressive manner to form the final output caption.
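Auto-regressive generation simply means emitting one token at a time, each conditioned on the image regions, the detected tags, and the words generated so far. Below is a greedy-decoding sketch with a hypothetical `predict_next_token` interface and a dummy model so it runs end to end; it is not the actual inference code.

```python
# Greedy auto-regressive decoding sketch. `predict_next_token` is a hypothetical
# interface standing in for a captioning model that scores the next word given
# the image regions, the detected tags, and the partial caption.
def generate_caption(model, region_features, detected_tags,
                     bos="[CLS]", eos="[SEP]", max_len=20):
    tokens = [bos]
    for _ in range(max_len):
        next_token = model.predict_next_token(region_features, detected_tags, tokens)
        if next_token == eos:
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])  # drop the start-of-sequence token

class DummyModel:
    """Toy stand-in that replays a fixed caption, just to make the sketch runnable."""
    def __init__(self, canned=("a", "man", "playing", "an", "accordion", "[SEP]")):
        self.canned = list(canned)

    def predict_next_token(self, regions, tags, tokens):
        return self.canned[len(tokens) - 1]

print(generate_caption(DummyModel(), region_features=None, detected_tags=["accordion"]))
```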

State-of-the-art performance and exceeding human CIDEr scores
\"\"<\/figure><\/div>\n\n\n\n

We compare our method with UpDown and OSCAR, which represent the state of the art on the nocaps benchmark. The training data for the baselines is the COCO dataset. Following prior settings, we also report results after applying SCST and Constrained Beam Search (CBS).

The evaluation results on the nocaps validation and test sets are shown in Table 1. Our method achieves significant improvement compared with prior works: even the plain version (VIVO) outperforms UpDown+ELMo+CBS and OSCAR by a large margin. Our results set a new state of the art and surpass human CIDEr scores on the overall dataset.
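CIDEr measures how well a generated caption matches a set of human reference captions using TF-IDF-weighted n-gram similarity, so surpassing the human CIDEr baseline means the model's captions agree with the references more, under this metric, than held-out human captions do. A minimal scoring sketch, assuming the standard pycocoevalcap toolkit is installed and using made-up captions:

```python
# Minimal CIDEr scoring sketch, assuming the standard `pycocoevalcap` toolkit is
# installed (pip install pycocoevalcap). The image ids and captions are made up.
from pycocoevalcap.cider.cider import Cider

references = {  # image id -> list of human reference captions
    "img_1": ["a man is playing an accordion on a couch",
              "a person plays an instrument while sitting"],
    "img_2": ["a dog runs across a grassy field",
              "a brown dog running on the grass"],
}
candidates = {  # image id -> a one-element list with the generated caption
    "img_1": ["a man playing an accordion on a couch"],
    "img_2": ["a dog running through a field of grass"],
}

corpus_score, per_image_scores = Cider().compute_score(references, candidates)
print(f"CIDEr: {corpus_score:.3f}")
```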

Visual-text alignment: precise localization of novel objects
\"\"
Figure 3: Image captioning results on nocaps. B: our baseline without adding VIVO pretraining. V: our approach with VIVO pretraining. Red text represents novel objects. For each image, we show the similarity scores of each image region to the novel objects appear in the captions. The bounding box color is brighter when the similarity is higher.<\/figcaption><\/figure><\/div>\n\n\n\n

To further understand the effect of VIVO pretraining in learning a visual vocabulary, that is, aligning image regions with object tags, we show how novel object tags can be grounded to image regions. We estimate the similarity between the representations of each image region and object tag pair and highlight the pairs with high scores in Figure 3. The results show that our model can precisely localize these novel objects.
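As a small illustration of this grounding step, the sketch below picks, for a given tag embedding, the image region whose embedding is most similar, which is the region whose bounding box would be highlighted brightest in Figure 3. The embeddings and boxes are random placeholders, not outputs of the actual model.

```python
# Sketch of localizing a novel object tag: pick the image region whose embedding
# is most similar to the tag embedding. The embeddings and boxes below are
# random placeholders, not outputs of the actual model.
import numpy as np

def localize_tag(region_embs, tag_emb, boxes):
    region_embs = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    tag_emb = tag_emb / np.linalg.norm(tag_emb)
    scores = region_embs @ tag_emb              # similarity of each region to the tag
    best = int(np.argmax(scores))
    return boxes[best], float(scores[best])     # box to highlight, and its score

rng = np.random.default_rng(1)
boxes = [(10, 20, 50, 80), (100, 40, 60, 60), (5, 5, 30, 30)]  # (x, y, w, h) placeholders
box, score = localize_tag(rng.normal(size=(3, 768)), rng.normal(size=768), boxes)
print(box, round(score, 3))
```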

Looking forward: High potential for performance improvements

We have demonstrated the power of learning a visual vocabulary for novel object captioning. As the first VLP method that does not rely on paired image-sentence data, VIVO can leverage large-scale vision datasets with image-tag pairs in pretraining. It is worth noting that using machine-generated image tags rather than human-written captions makes it possible to utilize a potentially unlimited number of training images to improve performance, which we will pursue in future work.

We will have more updates in the coming months. Please check out our project page to learn more about our technology and future updates.

This research was conducted by Xiaowei Hu, Kevin Lin, Lijuan Wang, Lei Zhang, Jianfeng Gao, and Zicheng Liu from the Microsoft Azure Cognitive Services team in collaboration with Microsoft Research. This research is part of our Azure Florence research initiative on vision and language, sponsored by Microsoft Azure Cognitive Services.


<\/p>\n","protected":false},"excerpt":{"rendered":"

Consider for a moment what it takes to visually identify and describe something to another person. Now imagine that the other person can\u2019t see the object or image, so every detail matters. How do you decide what information is important and what\u2019s not? You\u2019ll need to know exactly what everything is, where it is, what […]<\/p>\n","protected":false},"author":38838,"featured_media":697999,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-696898","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[144931,737755],"related-projects":[689814,279642],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Kevin Lin","user_id":39694,"display_name":"Kevin Lin","author_link":"Kevin Lin<\/a>","is_active":false,"last_first":"Lin, Kevin","people_section":0,"alias":"keli"},{"type":"user_nicename","value":"Lijuan Wang","user_id":32680,"display_name":"Lijuan Wang","author_link":"Lijuan Wang<\/a>","is_active":false,"last_first":"Wang, Lijuan","people_section":0,"alias":"lijuanw"}],"msr_type":"Post","featured_image_thumbnail":"\"\"","byline":"Kevin Lin<\/a>, Xiaowei Hu, and Lijuan Wang<\/a>","formattedDate":"October 14, 2020","formattedExcerpt":"Consider for a moment what it takes to visually identify and describe something to another person. Now imagine that the other person can\u2019t see the object or image, so every detail matters. How do you decide what information is important and what\u2019s not? 
You\u2019ll need…","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/696898"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=696898"}],"version-history":[{"count":10,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/696898\/revisions"}],"predecessor-version":[{"id":698038,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/696898\/revisions\/698038"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/697999"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=696898"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=696898"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=696898"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=696898"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=696898"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=696898"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=696898"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=696898"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=696898"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=696898"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=696898"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}