{"id":963351,"date":"2023-08-21T17:50:00","date_gmt":"2023-08-22T00:50:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=963351"},"modified":"2024-02-01T15:14:18","modified_gmt":"2024-02-01T23:14:18","slug":"mega-multilingual-evaluation-of-generative-ai","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mega-multilingual-evaluation-of-generative-ai\/","title":{"rendered":"MEGA: Multilingual Evaluation of Generative AI"},"content":{"rendered":"
Generative AI models have impressive performance on many Natural Language Processing (NLP) tasks, such as language understanding, reasoning, and language generation. One of the most important questions the AI community is asking today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative Large Language Models (LLMs) are restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 8 diverse tasks and 33 typologically diverse languages. We also compare the performance of generative LLMs to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform relative to the previous generation of LLMs. We present a thorough analysis of model performance across languages and discuss some of the reasons why generative LLMs are currently not optimal for all languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.<\/p>\n","protected":false},"excerpt":{"rendered":"
Generative AI models have impressive performance on many Natural Language Processing tasks such as language understanding, reasoning and language generation. One of the most important questions that is being asked by the AI community today is about the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"msr-content-type":[3],"msr-research-highlight":[],"research-area":[13556,13545],"msr-publication-type":[193716],"msr-product-type":[],"msr-focus-area":[],"msr-platform":[],"msr-download-source":[],"msr-locale":[268875],"msr-post-option":[],"msr-field-of-study":[251857,263203,255439,268119],"msr-conference":[260143],"msr-journal":[],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-963351","msr-research-item","type-msr-research-item","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-human-language-technologies","msr-locale-en_us","msr-field-of-study-benchmarking","msr-field-of-study-computation-and-language","msr-field-of-study-human-language-technologies","msr-field-of-study-large-language-models-multilinguality"],"msr_publishername":"","msr_edition":"","msr_affiliation":"","msr_published_date":"2023-10-7","msr_host":"","msr_duration":"","msr_version":"","msr_speaker":"","msr_other_contributors":"","msr_booktitle":"","msr_pages_string":"","msr_chapter":"","msr_isbn":"","msr_journal":"","msr_volume":"","msr_number":"","msr_editors":"","msr_series":"","msr_issue":"","msr_organization":"","msr_how_published":"","msr_notes":"","msr_highlight_text":"","msr_release_tracker_id":"","msr_original_fields_of_study":"","msr_download_urls":"","msr_external_url":"","msr_secondary_video_url":"","msr_longbiography":"","msr_microsoftintellectualprop
erty":1,"msr_main_download":"","msr_publicationurl":"","msr_doi":"","msr_publication_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/arxiv.org\/abs\/2303.12528","label_id":"243109","label":0}],"msr_related_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/arxiv.org\/pdf\/2303.12528.pdf","label_id":"243112","label":0}],"msr_attachments":[],"msr-author-ordering":[{"type":"guest","value":"kabir-ahuja","user_id":949593,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=kabir-ahuja"},{"type":"user_nicename","value":"Harshita Diddee","user_id":42459,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Harshita Diddee"},{"type":"text","value":"Rishav Hada","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Millicent Ochieng","user_id":40678,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Millicent Ochieng"},{"type":"text","value":"Krithika Ramesh","user_id":0,"rest_url":false},{"type":"text","value":"Prachi Jain","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Akshay Nambi","user_id":38169,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Akshay Nambi"},{"type":"user_nicename","value":"Tanuja Ganu","user_id":38883,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Tanuja Ganu"},{"type":"user_nicename","value":"Sameer Segal","user_id":40861,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Sameer Segal"},{"type":"user_nicename","value":"Maxamed Axmed","user_id":40888,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Maxamed Axmed"},{"type":"user_nicename","value":"Kalika 
Bali","user_id":32477,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Kalika Bali"},{"type":"user_nicename","value":"Sunayana Sitaram","user_id":37287,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Sunayana Sitaram"}],"msr_impact_theme":[],"msr_research_lab":[199562,1021599],"msr_event":[],"msr_group":[144940,643845],"msr_project":[950052,1068003],"publication":[1004169],"video":[],"download":[1004169],"msr_publication_type":"inproceedings","related_content":{"projects":[{"ID":950052,"post_title":"Project VeLLM","post_name":"project-vellm","post_type":"msr-project","post_date":"2023-06-16 19:33:56","post_modified":"2024-07-11 22:46:39","post_status":"publish","permalink":"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-vellm\/","post_excerpt":"uniVersal Empowerment with LLMs The technology landscape is being rapidly transformed by Large Language Models (LLMs), allowing users to address real-world applications in various domains. However, a digital divide exists that may exclude large populations from benefiting and contributing to this technological revolution due to factors such as language, income, digital awareness, and access to information. 
To address this issue, Project VeLLM (UniVersal Empowerment with Large Language Models) is focused on developing a principled approach to…","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/950052"}]}},{"ID":1068003,"post_title":"African Languages Research","post_name":"african-languages-research","post_type":"msr-project","post_date":"2024-08-14 06:39:23","post_modified":"2024-08-14 09:29:33","post_status":"publish","permalink":"https:\/\/www.microsoft.com\/en-us\/research\/project\/african-languages-research\/","post_excerpt":"Bridging technological and community divides by enriching Generative AI research with the rich diversity of African languages. Join us in shaping a more inclusive digital future Microsoft Research Africa is pioneering advancements in Generative AI through our African Languages Research initiative, focusing on advancing Large Language Models (LLMs) for African languages. Our goal is to enhance language accessibility and ensure AI tools can effectively engage with Africa's linguistic diversity, supporting both language preservation and 
the…","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1068003"}]}}]},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/963351"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-research-item"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/963351\/revisions"}],"predecessor-version":[{"id":1004160,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/963351\/revisions\/1004160"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=963351"}],"wp:term":[{"taxonomy":"msr-content-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-content-type?post=963351"},{"taxonomy":"msr-research-highlight","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-highlight?post=963351"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=963351"},{"taxonomy":"msr-publication-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publication-type?post=963351"},{"taxonomy":"msr-product-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-product-type?post=963351"},{"taxonomy":"msr-focus-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-focus-area?post=963351"},{"taxonomy":"msr-platform","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-platform?post=963351"},{"taxonomy":"msr-download-source","embeddable":true,"href
":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-download-source?post=963351"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=963351"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=963351"},{"taxonomy":"msr-field-of-study","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-field-of-study?post=963351"},{"taxonomy":"msr-conference","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-conference?post=963351"},{"taxonomy":"msr-journal","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-journal?post=963351"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=963351"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=963351"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}