{"id":735655,"date":"2021-02-11T07:01:33","date_gmt":"2021-02-11T15:01:33","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=735655"},"modified":"2021-03-23T07:27:10","modified_gmt":"2021-03-23T14:27:10","slug":"ai-advances-in-image-captioning-describing-images-as-well-as-people-do","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/ai-advances-in-image-captioning-describing-images-as-well-as-people-do\/","title":{"rendered":"AI advances in image captioning: Describing images as well as people do"},"content":{"rendered":"

Image captioning is an interesting problem at the intersection of computer vision and natural language processing, and it has attracted great attention from both research communities. Recent image captioning models have achieved impressive results on tasks where large amounts of paired image-caption training data are available. However, they generalize poorly to images in the wild, which contain a wide variety of visual objects unseen in the training caption corpora. This raises the challenge of Novel Object Captioning (NOC): generating captions that describe novel objects absent from the paired image-caption training data, a capability that is especially important in real-world applications.

This webinar will focus on some of the recent vision-language pretraining (VLP) approaches for image captioning. We will cover our latest approaches, including object-semantics aligned pretraining (OSCAR) and visual-vocabulary pretraining (VIVO), discuss their key principles, and explain how we address the core challenges in image caption generation. Join us to learn how our discovery leads to a new image captioning framework that achieves state-of-the-art performance on the nocaps benchmark (developed to evaluate NOC at scale) and surpasses human CIDEr scores on that benchmark for the first time.
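For readers curious how a CIDEr score is actually computed on a benchmark like nocaps, the sketch below scores a few generated captions against human references using the commonly used pycocoevalcap package. The image IDs and captions are invented placeholders, and real evaluations tokenize captions and use the full reference set, so treat this as an illustration rather than the official nocaps protocol.

```python
# Minimal CIDEr-scoring sketch with the pycocoevalcap package
# (pip install pycocoevalcap). IDs and captions below are placeholders.
from pycocoevalcap.cider.cider import Cider

# References: each image ID maps to a list of human-written captions.
gts = {
    "img1": ["a dog catching a frisbee in a park",
             "a brown dog jumps for a frisbee on the grass"],
    "img2": ["a red accordion on a wooden table"],
}
# Candidates: exactly one generated caption per image ID.
res = {
    "img1": ["a dog jumping to catch a frisbee"],
    "img2": ["an accordion sitting on a table"],
}

corpus_score, per_image_scores = Cider().compute_score(gts, res)
print(f"CIDEr: {corpus_score:.3f}")  # higher means closer n-gram agreement with references
```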

Visual-vocabulary pretraining (VIVO) conducts pretraining with vision data only. Because the method does not need paired image-caption data, it opens up the possibility of leveraging large numbers of images paired with either human-labeled or machine-generated tags. With VIVO pretraining, the performance of the captioning model, especially on novel objects, is substantially improved.
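To make the idea of tag-only pretraining more concrete, here is a minimal, hypothetical sketch of the masked-tag-prediction objective that VIVO builds on: image region features and a bag of object tags are fed to a transformer, some tag tokens are masked, and the model learns to recover them. This is not the authors' implementation; the module names, dimensions, and toy vocabulary are assumptions, and the actual VIVO objective uses a Hungarian matching loss over the unordered tag set rather than the plain cross-entropy used here.

```python
# Simplified sketch of VIVO-style pretraining: masked tag prediction from
# image region features plus a bag of object tags. All module names,
# dimensions, and the toy vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

class TagPretrainer(nn.Module):
    def __init__(self, vocab_size, region_dim=2048, hidden=512, layers=4):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, hidden)   # project detector region features
        self.tag_embed = nn.Embedding(vocab_size, hidden)  # embed tag tokens (id 0 = [MASK], by assumption)
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.tag_head = nn.Linear(hidden, vocab_size)      # predict the identity of masked tags

    def forward(self, regions, tags, mask):
        # regions: (B, R, region_dim) image region features
        # tags:    (B, T) tag token ids; mask: (B, T) True where a tag is masked out
        tag_in = tags.masked_fill(mask, 0)
        x = torch.cat([self.region_proj(regions), self.tag_embed(tag_in)], dim=1)
        h = self.encoder(x)
        return self.tag_head(h[:, regions.size(1):])       # logits only for tag positions

# Toy training step: recover the masked tags with cross-entropy.
model = TagPretrainer(vocab_size=1000)
regions = torch.randn(2, 10, 2048)                         # e.g. features from an object detector
tags = torch.randint(1, 1000, (2, 5))                      # human-labeled or machine-generated tags
mask = torch.rand(2, 5) < 0.15
mask[:, 0] = True                                          # ensure at least one masked tag per image
logits = model(regions, tags, mask)
loss = nn.functional.cross_entropy(logits[mask], tags[mask])
loss.backward()
```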

What you’ll learn: