{"id":950022,"date":"2023-06-16T16:11:13","date_gmt":"2023-06-16T23:11:13","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=950022"},"modified":"2023-07-09T05:31:10","modified_gmt":"2023-07-09T12:31:10","slug":"acl-2023-multilingual-models-tutorial","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/acl-2023-multilingual-models-tutorial\/","title":{"rendered":"ACL 2023 Multilingual Models Tutorial"},"content":{"rendered":"\n\n\n\n\n
Date/time: July 9, 2023 | 9:00 AM – 12:30 PM

Location: Metropolitan West, Westin Harbour Castle, Toronto, Canada

This tutorial workshop is co-located with ACL 2023.

Tutorial Slides

The technology landscape is being rapidly transformed by Large Language Models (LLMs), which allow users to address real-world applications in many domains. However, a digital divide exists that may exclude large populations from benefiting from and contributing to this technological revolution, owing to factors such as language, income, digital awareness, and access to information. At Microsoft, we are dedicated to making Large Language Models inclusive of everyone on the planet.

This tutorial describes what it takes to scale language technologies to many of the world's languages, presenting the latest research on Massively Multilingual Language Models (MMLMs). We cover data collection, training, and fine-tuning of models; Responsible AI issues such as fairness, bias, and toxicity; linguistic diversity; and evaluation in the context of MMLMs, focusing specifically on issues in non-English and low-resource languages. We also discuss some of the real-world challenges of deploying these models in language communities in the field.

Tutorial topics

Data Collection and Training of Multilingual LLMs

Prompting Strategies for Multilingual LLMs (see the sketch after this list)

Evaluation, Interpretability and Analysis of Multilingual LLMs
- Datasets
- Benchmarking Exercises
- Evaluation Beyond Task Performance
- Challenges in Multilingual Evaluation
- Analysis and Interpretability
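For a flavor of the prompting topic: the reading list at the end of this page includes work on multilingual chain-of-thought prompting, where the model is shown worked examples with step-by-step rationales, and where the exemplar language, the reasoning language, and the test-question language can each be varied independently. Below is a minimal sketch of how such a prompt might be assembled; the exemplars, the `build_prompt` helper, and the language choices are illustrative assumptions, not code from the tutorial.

```python
# Minimal sketch: assembling a cross-lingual chain-of-thought prompt.
# Everything here (exemplars, helper names, language choices) is
# illustrative; the tutorial does not prescribe this code.

from dataclasses import dataclass

@dataclass
class Exemplar:
    question: str   # problem statement, possibly in a non-English language
    reasoning: str  # step-by-step rationale shown to the model
    answer: str     # final answer

# Hypothetical few-shot exemplar: a Spanish question with English
# reasoning, since the two languages need not match.
EXEMPLARS = [
    Exemplar(
        question="Juana tiene 3 manzanas y compra 2 más. ¿Cuántas tiene?",
        reasoning="Juana starts with 3 apples and buys 2 more. 3 + 2 = 5.",
        answer="5",
    ),
]

def build_prompt(test_question: str, exemplars=EXEMPLARS) -> str:
    """Concatenate worked examples and the test question into one prompt."""
    parts = []
    for ex in exemplars:
        parts.append(f"Q: {ex.question}\nA: Let's think step by step. "
                     f"{ex.reasoning} The answer is {ex.answer}.")
    # The test question gets the same cue but no answer, so the model
    # continues with its own step-by-step reasoning.
    parts.append(f"Q: {test_question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Test question in a third language (Swahili) to illustrate the
    # cross-lingual setting.
    print(build_prompt("Ali ana kalamu 4 na anapoteza kalamu 1. Amebaki na ngapi?"))
```

Sending the assembled prompt to an actual LLM is deliberately left out; the cited work on multilingual chain-of-thought studies how choices such as reasoning in English versus the question's own language affect accuracy.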
Organizing committee
Reading list: Prompting Strategies for Multilingual LLMs

- Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057, 2022.
- Yuxuan Chen, David Harbecke, and Leonhard Hennig. Multilingual relation classification via efficient and effective prompting. arXiv preprint arXiv:2210.13838, 2022.
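A recurring theme under the evaluation topics listed above is that a single aggregate score can hide large differences in how a model performs across languages. As a rough, self-contained illustration (all data below is made up; no real model or benchmark is involved), a benchmarking exercise might tally per-language accuracy on an XNLI-style classification task like this:

```python
# Minimal sketch: per-language accuracy on a multilingual benchmark.
# The data is fabricated for illustration; a real benchmarking exercise
# would load model predictions and gold labels from a dataset such as XNLI.

from collections import defaultdict

# (language, gold_label, predicted_label) triples; illustrative only.
RESULTS = [
    ("en", "entailment", "entailment"),
    ("en", "contradiction", "contradiction"),
    ("sw", "entailment", "neutral"),
    ("sw", "contradiction", "contradiction"),
    ("hi", "neutral", "neutral"),
    ("hi", "entailment", "contradiction"),
]

def per_language_accuracy(results):
    """Return {language: accuracy} so cross-language gaps stay visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for lang, gold, pred in results:
        total[lang] += 1
        correct[lang] += int(gold == pred)
    return {lang: correct[lang] / total[lang] for lang in total}

if __name__ == "__main__":
    scores = per_language_accuracy(RESULTS)
    for lang, acc in sorted(scores.items()):
        print(f"{lang}: {acc:.2f}")
    # The spread between the best and worst language is often a more
    # informative summary than the overall mean.
    print(f"gap: {max(scores.values()) - min(scores.values()):.2f}")
```

Reporting the gap between the best- and worst-served languages alongside the mean is one simple way to keep low-resource languages visible in evaluation.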