{"id":791159,"date":"2021-11-01T11:07:56","date_gmt":"2021-11-01T18:07:56","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=791159"},"modified":"2022-02-01T11:47:52","modified_gmt":"2022-02-01T19:47:52","slug":"turing-bletchley-a-universal-image-language-representation-model-by-microsoft","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/turing-bletchley-a-universal-image-language-representation-model-by-microsoft\/","title":{"rendered":"Turing Bletchley: A Universal Image Language Representation model by Microsoft"},"content":{"rendered":"\n
\"An<\/figure>\n\n\n\n

Today, the Microsoft Turing team is thrilled to introduce Turing Bletchley, a 2.5-billion-parameter Universal Image Language Representation model (T-UILR) that can perform image-language tasks in 94 languages. T-Bletchley has an image encoder and a universal language encoder that vectorize the input image and text, respectively, so that semantically similar images and texts align with each other. This model shows uniquely powerful capabilities and represents a groundbreaking advance in image-language understanding.

T-Bletchley outperforms state-of-the-art models, like Google's ALIGN, on English image-language datasets (ImageNet, CIFAR, and COCO), and outperforms MULE, SMALR, and M³P on universal image-language datasets (Multi30k and COCO). To see T-Bletchley in action, navigate to the demo.

Significance of multi-modal and universal
\"Three
Image showing \u201ca beautiful sunset on the beach\u201d<\/figcaption><\/figure><\/div>\n\n\n\n

Language and vision are inherently linked. When we hear the statement "a beautiful sunset on the beach," we imagine an image similar to the one above. Models that focus only on language fail to capture this link. To these models, sentences are no more than a grammatically correct sequence of words.

Furthermore, vision is a global modality. The same sight of a beach sunset can be described in any language ("una hermosa puesta de sol en la playa", "un beau coucher de soleil sur la plage", "Matahari terbenam yang indah di pantai", etc.), and the corresponding visual representation would not change. Traditional multi-modal models tie vision to a particular language (most commonly English) and therefore fail to capture this universal property of vision.

With T-Bletchley, we address both these shortcomings. We take a multi-modal approach that advances a computer's ability to understand language as well as to understand images natively, just from pixels. Additionally, we take a universal-first approach to the language modality when developing the model. The result is a one-of-a-kind universal multi-modal model that understands images and text across 94 different languages, resulting in some impressive capabilities. For example, by utilizing a common image-language vector space, without using any metadata or extra information like surrounding text, T-Bletchley can retrieve images that match a text description provided in any language. It can also find images that answer text-based questions in any language, or images that are semantically similar to another image.


T-Bletchley in action

To test the capabilities of T-Bletchley, we built an image retrieval system consisting of 30 million randomly sampled images from the web that were unseen by the model during training. The images, without any captions, alt-text, or other forms of text metadata, were encoded by the image encoder and stored in an index.

We built two types of retrieval systems: text-to-image and image-to-image. We vectorize the input query (with the text encoder for text-to-image, and with the image encoder for image-to-image), then use the encoded vector as a key to query the index and find its nearest neighbors in the vector space using the approximate nearest neighbor (ANN) algorithm HNSW. The nearest neighbors are then displayed as the image retrieval results.
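As a rough illustration of this retrieval flow, the sketch below builds a cosine-distance HNSW index with the open-source hnswlib package and queries it with an encoded vector. The library choice, the index parameters, and the `encode_text`/`encode_image` wrappers are assumptions made for illustration; the post only states that HNSW was used, not how it was implemented.

```python
import numpy as np
import hnswlib  # open-source HNSW implementation, used here as a stand-in ANN index

DIM = 1024           # dimensionality of the encoder output vectors (assumed)
NUM_IMAGES = 30_000  # placeholder; the real index held ~30 million image vectors

# Placeholder for the vectors produced by the T-Bletchley image encoder.
image_vectors = np.random.rand(NUM_IMAGES, DIM).astype(np.float32)

# Build the ANN index once over the encoded image collection.
index = hnswlib.Index(space="cosine", dim=DIM)
index.init_index(max_elements=NUM_IMAGES, ef_construction=200, M=16)
index.add_items(image_vectors, np.arange(NUM_IMAGES))
index.set_ef(64)  # recall/latency trade-off at query time

def retrieve(query_vector: np.ndarray, k: int = 5):
    """Return ids and distances of the k nearest image vectors to the query."""
    labels, distances = index.knn_query(query_vector, k=k)
    return labels[0], distances[0]

# Text-to-image: encode the query with the text encoder (hypothetical wrapper).
# ids, _ = retrieve(encode_text("a beautiful sunset on the beach"))
# Image-to-image: encode the query with the image encoder (hypothetical wrapper).
# ids, _ = retrieve(encode_image(query_image))
```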

Today's image retrieval systems depend heavily on the text metadata available for images, e.g., image captions, alt-text, surrounding text, and image URLs. T-Bletchley is unique in that it can perform image retrieval from the encoded image vectors alone, without using any text metadata. This is a big step toward true image understanding compared to today's systems. Moreover, the demo was built directly with the pre-trained model and was not fine-tuned on any image retrieval task.

In addition, today's image retrieval systems also apply object tagging algorithms to images to augment the text metadata (i.e., add tags like car, house, or beach generated from the image). Since these object tagging systems are trained on human-labeled data, the number of classes (tags) is extremely limited. T-Bletchley is trained on unsupervised data and, as a result, understands a very large number of objects, actions, and other real-world concepts (dancing, programming, racing, etc.).

Below are some examples that showcase the capabilities of T-Bletchley in an image retrieval system.

Universal text-to-image retrieval

Below are examples of images retrieved using text-based queries in multiple languages:

\"Three<\/figure>\n\n\n\n

The third example shows that T-Bletchley "understands" the act of programming and has carved out a vector subspace dedicated solely to images of cats programming. True image understanding can be used to improve current retrieval systems by placing a greater weight on the image itself.

Code-switched retrieval
\"Two<\/figure>\n\n\n\n

T-Bletchley can even retrieve images for non-English queries written in English script!

\"Two<\/figure>\n\n\n\n

T-Bletchley can understand sentences containing multiple languages and scripts:

\"A<\/figure>\n\n\n\n

Image-to-image retrieval

To evaluate image-to-image retrieval, we encode the given image using the image encoder and retrieve the closest image vectors, and their corresponding images, from the index. Because T-Bletchley was trained to pick the best caption for an image, it tends to prefer semantically similar images over visually similar ones.

\"An<\/figure>\n\n\n\n

The images retrieved by T-Bletchley are not necessarily similar in appearance to the query image. However, the images, all of the same geography, are 'semantically similar.' Notably, T-Bletchley does not return the following images from the retrieval set, even though they look like the input image:

\"Four<\/figure>\n\n\n\n

Understanding text within images

T-Bletchley is also able to understand text within images without the use of OCR technologies. In the following examples, images are directly passed to the image encoder and stored as 1024-dimensional vectors, and only the cosine similarity between these vectors is used to retrieve similar images.
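The following minimal sketch shows the kind of computation described here: ranking stored 1024-dimensional image vectors purely by cosine similarity to a query vector. The random vectors stand in for real encoder outputs.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for encoder outputs (1024 dimensions each).
query_vec = np.random.rand(1024).astype(np.float32)
candidate_vecs = np.random.rand(1000, 1024).astype(np.float32)

# Rank all candidates against the query by cosine similarity.
scores = candidate_vecs @ query_vec / (
    np.linalg.norm(candidate_vecs, axis=1) * np.linalg.norm(query_vec)
)
top5 = np.argsort(-scores)[:5]  # indices of the five most similar images
print(top5, scores[top5])
```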

\"Three<\/figure>\n\n\n\n

In the first example, T-Bletchley understands that the text in the image is about the differences between microeconomics and macroeconomics and retrieves similar slides. In the second example, T-Bletchley retrieves images related to COVID-19 even though its training data pre-dates COVID-19.

This capability is universal: it can be used in multiple languages. The examples below show retrieval in French and Arabic.

\"An<\/figure>\n\n\n\n

T-Bletchley: model development

Dataset

T-Bletchley was trained using billions of image-caption pairs drawn from the web.

Examples from the dataset are shown below.

\"A<\/figure>\n\n\n\n

A large, diverse training dataset resulted in a robust model that can handle a wide variety of images. To achieve universality, we also trained the model on a parallel corpus of 500 million translation pairs. These pairs were created by extracting sentences from document-aligned webpages in the Common Crawl corpus. Adding this translated-text contrastive task allowed us to create a language-agnostic vector representation of captions, which helped make the model much more universal.

Model architecture & training

T-Bletchley consists of transformer-based image and text encoders, both analogous to the BERT-large architecture.

\"A<\/figure>\n\n\n\n

Images and captions were independently encoded, and the model was trained by applying a contrastive loss over the generated image and text vectors. Similarly, to create a language-agnostic text representation, each sentence in a translation pair was independently encoded and a contrastive loss was applied over the resulting batch of vectors.
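A common way to implement such an objective is a CLIP-style symmetric contrastive (InfoNCE) loss over an in-batch similarity matrix; the sketch below illustrates that idea for either image-caption pairs or translation pairs. The exact loss formulation, temperature, and batching used for T-Bletchley are not given in this post, so treat this as an assumption-laden illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric in-batch contrastive loss.

    a, b: (batch, dim) vectors for the two sides of each pair, e.g. image/caption
    embeddings or the two sentences of a translation pair.
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Matching pairs sit on the diagonal; all other entries act as in-batch negatives.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example with random placeholder embeddings:
# loss = contrastive_loss(torch.randn(256, 1024), torch.randn(256, 1024))
```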

\"An<\/figure>\n\n\n\n

In this way, despite the image-caption pairs being predominantly in English, we managed to align captions in different languages with their corresponding images.

We leveraged the kernels in the DeepSpeed library (compatible with PyTorch) for our transformer implementation and the ZeRO optimizer for training the model.
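For readers unfamiliar with DeepSpeed, the sketch below shows one way a PyTorch model can be wrapped with `deepspeed.initialize` and a ZeRO configuration. All values in the config are placeholder assumptions; the post does not disclose the actual training configuration.

```python
import deepspeed  # pip install deepspeed

# Placeholder ZeRO configuration; every value here is an assumption.
ds_config = {
    "train_micro_batch_size_per_gpu": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# model = ...  # the PyTorch nn.Module holding the image and text encoders
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config
# )
# loss = contrastive_loss(*engine(batch))  # forward pass (hypothetical batch/loss wiring)
# engine.backward(loss)                    # backward with ZeRO partitioning
# engine.step()                            # optimizer step
```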

In-depth model evaluation

T-Bletchley advances the state of the art across multiple public benchmarks.

English

For this evaluation, we followed the prompt engineering and ensembling described in Google's ALIGN paper. T-Bletchley outperforms Google's ALIGN model on English image-language benchmarks and sets a new state of the art in zero-shot image classification, an area pioneered by OpenAI's CLIP model.
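Zero-shot classification in this style typically scores an image against text prompts built from the class names, optionally ensembling several prompt templates per class. The sketch below illustrates that recipe; the templates and the `encode_text`/`encode_image` helpers are hypothetical stand-ins, not the exact ones used in the evaluation.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the T-Bletchley encoders: each maps its input to a
# 1024-dimensional vector. Replace with real encoder calls in practice.
def encode_text(text: str) -> torch.Tensor:
    return torch.randn(1024)

def encode_image(image) -> torch.Tensor:
    return torch.randn(1024)

# Example prompt templates for ensembling (illustrative, not the exact set used).
TEMPLATES = ["a photo of a {}.", "a blurry photo of a {}.", "a drawing of a {}."]

def class_embedding(class_name: str) -> torch.Tensor:
    """Average the normalized embeddings of all prompts for one class."""
    vecs = torch.stack([encode_text(t.format(class_name)) for t in TEMPLATES])
    return F.normalize(F.normalize(vecs, dim=-1).mean(dim=0), dim=-1)

def classify(image, class_names):
    """Zero-shot classification: pick the class whose prompts best match the image."""
    image_vec = F.normalize(encode_image(image), dim=-1)
    class_vecs = torch.stack([class_embedding(c) for c in class_names])
    scores = class_vecs @ image_vec  # cosine similarities
    return class_names[int(scores.argmax())]

# print(classify(some_image, ["cat", "dog", "beach sunset"]))
```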

| Model | ImageNet | CIFAR-100 | CIFAR-10 | COCO R@1 (image -> text) | COCO R@1 (text -> image) |
|---|---|---|---|---|---|
| ALIGN | 76.4 | – | – | 58.6 | 45.6 |
| T-Bletchley | 79.0 | 83.5 | 97.7 | 59.1 | 43.3 |

When fine-tuned for retrieval, T-Bletchley outperforms ALIGN, the previous state of the art, by more than two points on the COCO test set.

| Model | Flickr30k R@1 (image -> text) | Flickr30k R@1 (text -> image) | COCO R@1 (image -> text) | COCO R@1 (text -> image) |
|---|---|---|---|---|
| OSCAR | – | – | 73.5 | 57.5 |
| ALIGN | 95.3 | 84.9 | 77.0 | 59.5 |
| T-Bletchley | 97.1 | 87.4 | 80.2 | 62.3 |

T-Bletchley achieves state-of-the-art results on English-specific tasks compared to English-only models. T-Bletchley's English performance is not hindered by universal language support!

Universal

T-Bletchley's universal retrieval capabilities were evaluated on the Multi30k, COCO-CN, and COCO-JP datasets and compared to multilingual models. Even before fine-tuning, T-Bletchley significantly outperforms previous models.

| Setting | Model | Multi30k French | Multi30k German | Multi30k Czech | COCO Chinese | COCO Japanese |
|---|---|---|---|---|---|---|
| Zero Shot | M³P | 27.1 | 36.8 | 20.4 | 32.3 | 33.3 |
| Zero Shot | T-Bletchley | 85.0 | 83.2 | 81.2 | 81.5 | 64.8 |

When T-Bletchley is fine-tuned, the model sets new state-of-the-art results in multiple languages, as shown in the table below.

| Setting | Model | Multi30k French | Multi30k German | Multi30k Czech | COCO Chinese | COCO Japanese |
|---|---|---|---|---|---|---|
| Finetuned | MULE | 62.3 | 64.1 | 57.7 | 75.6 | 75.9 |
| Finetuned | SMALR | 65.9 | 69.8 | 64.8 | 76.7 | 77.5 |
| Finetuned | M³P | 73.9 | 82.7 | 72.2 | 86.2 | 87.9 |
| Finetuned | T-Bletchley | 94.6 | 94.3 | 93.6 | 89.0 | 86.3 |

Future applications

The goal of T-Bletchley is to create a model that understands text and images as seamlessly as humans do. The first version of T-Bletchley represents a significant breakthrough toward this goal. We expect the T-Bletchley model to improve image question answering, image search, and image-to-image search experiences in Bing, Microsoft Office, and Azure.

Note on Responsible AI: Like other publicly available models, the Microsoft Turing models are trained with billions of pages of publicly available text and images, and hence may have picked up biases around gender, race, and more from these public documents. Mitigating negative effects from these biases is a difficult, industry-wide issue, and Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. We are putting these Microsoft AI principles into practice throughout the company and have taken extensive precautionary measures to prevent these implicit biases from being exhibited when the models are used in our products. We strongly encourage developers to do the same by putting appropriate guardrails and mitigations in place before taking these models to production.
