{"id":950052,"date":"2023-06-16T19:33:56","date_gmt":"2023-06-17T02:33:56","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=950052"},"modified":"2024-07-11T22:46:39","modified_gmt":"2024-07-12T05:46:39","slug":"project-vellm","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-vellm\/","title":{"rendered":"Project VeLLM"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Project VeLLM

uniVersal Empowerment with LLMs
The technology landscape is being rapidly transformed by Large Language Models (LLMs), which allow users to address real-world applications across many domains. However, a digital divide may exclude large populations from benefiting from and contributing to this technological revolution, owing to factors such as language, income, digital awareness, and access to information. To address this issue, Project VeLLM (uniVersal Empowerment with Large Language Models) is developing a principled approach to enable inclusive applications of LLMs for all languages and cultures worldwide. This interdisciplinary research project is conducted at Microsoft Research India in collaboration with partners across Microsoft. In Project VeLLM, we are working on the following fundamental research problems, which are currently barriers to making LLMs inclusive for everyone:
1. Multilingual Language Models
2. Responsible AI and safety across languages and cultures
3. Multi-modal models
4. Knowledge representation and grounding
5. Cost and optimization

Multilingual Language Models: Our work focuses on evaluating and improving LLMs on non-English languages. Towards this, we carried out a comprehensive evaluation of GPT models (EMNLP 2023) and other LLMs on the MEGA benchmark, which comprises 16 datasets covering over 70 languages. Our current focus in this direction is on scaling up multilingual evaluation, including the use of LLM-based evaluators in the multilingual setting with humans in the loop.
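As a rough illustration of what an LLM-based evaluator with a human in the loop can look like, the Python sketch below asks a judge model to score a candidate response in a target language against a simple rubric. The judge model name, the rubric wording, and the OpenAI client usage are illustrative assumptions, not the setup actually used in the MEGA evaluation.

```python
# Hypothetical sketch of an LLM-as-judge for multilingual outputs.
# Assumes the openai Python package (v1 API) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are evaluating a model response written in {language}.\n"
    "Rate linguistic acceptability and task quality on a 1-5 scale.\n"
    "Prompt: {prompt}\nResponse: {response}\n"
    "Return only: score=<1-5>; reason=<one sentence>"
)

def llm_judge(prompt: str, response: str, language: str) -> str:
    """Ask a judge LLM to score a candidate response in the target language."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model, not a project-specified choice
        messages=[{
            "role": "user",
            "content": RUBRIC.format(language=language, prompt=prompt, response=response),
        }],
        temperature=0,
    )
    return completion.choices[0].message.content

# Example: score a Hindi response; low or disputed scores can be routed to human reviewers.
print(llm_judge("Summarise the weather report.", "मौसम साफ रहेगा और तापमान 30°C रहेगा।", "Hindi"))
```

In a human-in-the-loop setup, such automatic scores would be sampled and audited by native speakers rather than trusted blindly.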

Responsible AI and safety across languages and cultures: Our focus in this direction is on defining and reducing bias in LLMs for non-English languages and cultures. Our survey (EACL 2023) describes the challenges in scaling fairness to languages beyond English, and our current work includes parameter-efficient techniques to reduce bias in models across various dimensions of bias.
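As a minimal sketch of how parameter-efficient debiasing fine-tuning might be set up, the example below attaches LoRA adapters to a multilingual base model and trains them on counter-stereotypical text, assuming the Hugging Face transformers/peft/datasets stack. The base model, target modules, and the debias_pairs.jsonl file are hypothetical placeholders, not the project's actual recipe.

```python
# Illustrative LoRA fine-tuning sketch; adapters train a small fraction of parameters
# while the frozen base weights are left untouched.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "bigscience/bloom-560m"  # placeholder multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Hypothetical counter-stereotypical / balanced examples, one text per line of JSON.
data = load_dataset("json", data_files="debias_pairs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="debias-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter weights change, the same base model can carry separate adapters for different bias dimensions or languages.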

Related people: Tanuja Ganu, Akshay Nambi, Kalika Bali, Sunayana Sitaram, Manohar Swaminathan, Saikat Guha, Vivek Seshadri, Mohit Jain, Kavyansh Chourasia