{"id":1100883,"date":"2024-11-07T07:13:36","date_gmt":"2024-11-07T15:13:36","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=1100883"},"modified":"2024-11-07T12:47:47","modified_gmt":"2024-11-07T20:47:47","slug":"llm2clip","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/llm2clip\/","title":{"rendered":"LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation"},"content":{"rendered":"

LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation<\/h1>\n\n\n\n

Makes the SOTA pretrained CLIP model even more SOTA.<\/p>\n\n\n\n\n\n

CLIP is one of the most important multimodal foundational models today, aligning visual and textual signals into a shared feature space using a simple contrastive learning loss on large-scale image-text pairs. What powers CLIP\u2019s capabilities? The rich supervision signals provided by natural language \u2014 the carrier of human knowledge \u2014 shape a powerful cross-modal representation space. As a result, CLIP supports a variety of tasks, including zero-shot classification, detection, segmentation, and cross-modal retrieval, significantly influencing the entire multimodal domain.
However, with the rapid advancements in large language models (LLMs) like GPT-4 and LLaMA, the boundaries of language comprehension and generation are continually being pushed. This raises an intriguing question: can the capabilities of LLMs be harnessed to further improve multimodal representation learning?<\/strong><\/em><\/p>\n\n\n\n
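As a concrete reference for how this alignment is trained, the snippet below is a minimal sketch of the symmetric contrastive (InfoNCE-style) objective used in CLIP-style training: matched image-text pairs are pulled together and all other pairings in the batch are pushed apart. It is illustrative PyTorch, not the project\u2019s code, and the tensor names are placeholders.<\/p>\n\n\n\n
<pre><code>import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # image_features, text_features: (batch, dim) outputs of the two encoders.
    # Row i of each tensor comes from the same image-text pair.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Cosine-similarity matrix between every image and every text in the batch.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
<\/code><\/pre>\n\n\n\n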

\"diagram\"<\/figure>\n\n\n\n


The potential benefits of incorporating LLMs into CLIP are clear. LLMs\u2019 strong textual understanding can fundamentally improve how CLIP handles image captions, drastically enhancing its ability to process long and complex texts \u2014 a well-known limitation of vanilla CLIP. Moreover, LLMs are trained on vast text corpora and possess open-world knowledge, which allows them to expand on caption information during training and increases the efficiency of the learning process.<\/p>\n\n\n\n


However, realizing this potential is challenging. Despite LLMs’ powerful internal comprehension, their autoregressive nature hides this capability within the model, leading to output features with poor discriminability. Our experiments show that directly integrating LLMs into CLIP results in catastrophic performance drops.<\/p>\n\n\n\n
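For concreteness, the sketch below shows what such a naive baseline looks like: use a frozen autoregressive LM as the text encoder by pooling its last hidden state at the final token of each caption. The choice of gpt2 and of last-token pooling are illustrative assumptions, not the setup used in our experiments; the point is that features produced this way are optimized for next-token prediction rather than for separating captions in embedding space.<\/p>\n\n\n\n
<pre><code>import torch
from transformers import AutoModel, AutoTokenizer

# Any causal LM works for this illustration; gpt2 is just a small stand-in.
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModel.from_pretrained('gpt2').eval()

@torch.no_grad()
def naive_llm_text_embedding(captions):
    # Pool the final hidden state at the last real (non-padding) token.
    batch = tokenizer(captions, padding=True, return_tensors='pt')
    hidden = model(**batch).last_hidden_state            # (batch, seq, dim)
    last_idx = batch['attention_mask'].sum(dim=1) - 1    # last non-pad position
    return hidden[torch.arange(hidden.size(0)), last_idx]

emb = naive_llm_text_embedding(['a dog chasing a ball',
                                'a photo of a dog playing fetch'])
print(emb.shape)  # raw autoregressive features; paraphrases are not guaranteed to land close together
<\/code><\/pre>\n\n\n\n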

We propose LLM2CLIP<\/strong>, a novel approach that embraces the power of LLMs to unlock CLIP\u2019s potential. By fine-tuning the LLM in the caption space with contrastive learning, we extract its textual capabilities into the output embeddings, significantly improving the output layer\u2019s textual discriminability. We then design an efficient training process in which the fine-tuned LLM acts as a powerful teacher for CLIP\u2019s visual encoder. Thanks to the LLM\u2019s presence, we can now incorporate longer and more complex captions without being restricted by the context window and capability limitations of the vanilla CLIP text encoder. Our experiments demonstrate that this approach brings substantial improvements in cross-modal tasks. Our method directly boosted the performance of the previously SOTA EVA02 model by 16.5% on both long-text and short-text retrieval tasks, transforming a CLIP model trained solely on English data into a state-of-the-art cross-lingual model. Moreover, when integrated into multimodal training with models like LLaVA 1.5, it consistently outperformed CLIP across nearly all benchmarks, demonstrating comprehensive performance improvements.<\/p>\n\n\n\n
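Under the assumption of a generic PyTorch setup, here is a highly simplified sketch of the two stages described above: stage 1 contrastively fine-tunes the LLM in caption space so that its output embeddings become discriminative (using two captions of the same image as a positive pair, which is one plausible construction rather than necessarily the exact recipe), and stage 2 freezes the tuned LLM and uses it as the text-side teacher while the CLIP visual encoder and a projection head are updated. llm_encoder, visual_encoder, and proj are placeholder callables, not the released implementation.<\/p>\n\n\n\n
<pre><code>import torch
import torch.nn.functional as F

def info_nce(za, zb, temperature=0.07):
    # Symmetric contrastive loss over row-aligned embedding batches.
    za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
    logits = za @ zb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def stage1_caption_finetune_step(llm_encoder, captions_a, captions_b, optimizer):
    # Stage 1 (sketch): make the LLM's output embeddings discriminative in caption space.
    loss = info_nce(llm_encoder(captions_a), llm_encoder(captions_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

def stage2_llm2clip_step(visual_encoder, proj, frozen_llm, images, captions, optimizer):
    # Stage 2 (sketch): the frozen, caption-tuned LLM supervises the visual encoder.
    with torch.no_grad():                      # LLM weights stay fixed
        text_feats = frozen_llm(captions)
    image_feats = proj(visual_encoder(images))
    loss = info_nce(image_feats, text_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
<\/code><\/pre>\n\n\n\n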

Webpage<\/strong>: https:\/\/aka.ms\/llm2clip<\/a><\/p>\n\n\n\n

GitHub<\/strong>: https:\/\/github.com\/microsoft\/LLM2CLIP<\/a><\/p>\n\n\n\n

Models<\/strong>: https:\/\/huggingface.co\/collections\/microsoft\/llm2clip-672323a266173cfa40b32d4c<\/a><\/p>\n\n\n\n\n\n

<\/p>\n","protected":false},"excerpt":{"rendered":"

Makes SOTA pretrained CLIP model more SOTA ever. CLIP is one of the most important multimodal foundational models today, aligning visual and textual signals into a shared feature space using a simple contrastive learning loss on large-scale image-text pairs. What powers CLIP\u2019s capabilities? The rich supervision signals provided by natural language \u2014 the carrier of […]<\/p>\n","protected":false},"featured_media":1100919,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":true,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-1100883","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Yifan Yang","user_id":41539,"people_section":"Related people","alias":"yifanyang"},{"type":"user_nicename","display_name":"Xufang Luo","user_id":40324,"people_section":"Related people","alias":"xufluo"},{"type":"user_nicename","display_name":"Yuqing Yang","user_id":40654,"people_section":"Related people","alias":"yuqyang"},{"type":"user_nicename","display_name":"Qi Dai","user_id":36689,"people_section":"Related people","alias":"qid"},{"type":"user_nicename","display_name":"Xiyang Dai","user_id":40384,"people_section":"Related people","alias":"xidai"},{"type":"user_nicename","display_name":"Dongdong Chen","user_id":40198,"people_section":"Related people","alias":"dochen"},{"type":"user_nicename","display_name":"Chong Luo","user_id":31450,"people_section":"Related people","alias":"cluo"},{"type":"user_nicename","display_name":"Lili Qiu","user_id":41320,"people_section":"Related 
people","alias":"liliqiu"}],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1100883"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":5,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1100883\/revisions"}],"predecessor-version":[{"id":1102014,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1100883\/revisions\/1102014"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1100919"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1100883"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1100883"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1100883"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1100883"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1100883"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}