{"id":1096245,"date":"2024-10-23T03:21:19","date_gmt":"2024-10-23T10:21:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=1096245"},"modified":"2024-11-13T01:25:24","modified_gmt":"2024-11-13T09:25:24","slug":"societal-ai-tab-workshop","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/societal-ai-tab-workshop\/","title":{"rendered":"2024 MSR Asia TAB Workshop: Shaping the Future with Societal AI"},"content":{"rendered":"\n\n\n\n\n
As AI continues to advance and its societal impact deepens, it presents both unprecedented opportunities for progress and significant challenges that require careful navigation. No longer merely a tool, AI is evolving into a companion to humans, reshaping the way we live and work and calling for new frameworks to understand and govern its role. This workshop at MSR Asia TAB 2024 will bring together internal and external researchers to explore critical themes such as AI evaluation, value alignment, and AI's far-reaching influence on productivity, education, research, and employment. By fostering interdisciplinary collaboration, we aim to harness AI's potential while ensuring it serves the long-term interests of humanity.
The goals of the workshop include, but are not limited to:
Organizing Committee

Xing Xie (Chair), Beibei Shi (Chair), Xiaoyuan Yi (Co-Chair), Fangzhao Wu (Co-Chair), Jianxun Lian (Co-Chair), Miran Lee, Binghao Huan

Venue: Meeting Room San Li Tun, 4th Floor, Microsoft Building 2, No. 5 Danling Street, Haidian District, Beijing, China
Agenda

Opening (9:30-9:40)
Keynote Speech (9:40-10:10): Societal AI: Tackling AI Challenges with Social Science Insights
Break and Group Photo (10:10-10:30)
Research Talks and Panel Discussion 1 (10:30-11:40): LLM-driven social science and generative agents

This session discusses the synergy between cutting-edge AI technologies and the ever-evolving field of (computational) social science. As large language models (LLMs) continue to revolutionize data analysis, predictive modeling, and content generation, their potential to transform (computational) social science research and practice becomes increasingly promising. In particular, we will delve into the current status and challenges of LLM-based social simulation. Participants will gain insights into how LLMs can be used to model complex social phenomena, simulate human behavior, and generate realistic social interactions.
10:30-10:40. AI to transform social science and vice versa: studies on economics and cultural understanding

Generative AI has been transforming research disciplines ranging from computer science and natural science to social science. How can we leverage advanced GenAI technology to assist research in social science, particularly on factors that deeply influence everyone? In this talk, I will share our latest efforts in two areas: economics and cultural understanding. The first effort adopts GenAI to simulate competition dynamics in society, aiming for accurate and profound simulations. In the second work, we study how to leverage social theories to help GenAI models better adapt to different cultures, given that current models are predominantly trained on Western cultures. I hope these works can shed light on better co-adaptation of social science and GenAI research in the future.
10:40-10:50. LLMob: An LLM Agent Framework for Personal Mobility Generation

This study introduces a novel approach using large language models (LLMs) integrated into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation. We evaluate our framework against state-of-the-art personal mobility generation approaches, demonstrating its effectiveness and its potential applications in urban mobility. Overall, this study is a pioneering effort to design an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis.
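For readers who want a concrete picture, a minimal sketch of what such a retrieval-augmented agent loop could look like is below. Every name here (llm_complete, retrieve_similar_days, the prompt and JSON format) is a hypothetical illustration of the idea described in the abstract, not the actual LLMob implementation.

```python
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM backend."""
    raise NotImplementedError

def retrieve_similar_days(history: list[dict], weekday: str, k: int = 3) -> list[dict]:
    """Retrieval-augmented step: fetch the agent's past records for similar days."""
    return [day for day in history if day["weekday"] == weekday][:k]

def generate_daily_activities(persona: str, history: list[dict], weekday: str) -> list[dict]:
    """Prompt the LLM with a persona and retrieved examples, then parse
    the generated schedule from JSON."""
    examples = retrieve_similar_days(history, weekday)
    prompt = (
        f"You are simulating the daily mobility of: {persona}\n"
        f"Past {weekday} schedules: {json.dumps(examples)}\n"
        'Reply with a JSON list: [{"time": "HH:MM", "place": "...", "activity": "..."}]'
    )
    return json.loads(llm_complete(prompt))
```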
10:50-11:00. Designing Cognitive Theory-inspired LLM Agents for Efficient Human Behavior Simulation

The rapid advancement of large language models (LLMs) has led to the emergence of human-like commonsense reasoning, sparking the development of numerous LLM agents. However, current LLM agents are often constrained by high computational costs. In this talk, I will introduce a cognitive theory-inspired framework that elicits efficient reasoning in LLM agents. This framework harnesses the synergy between larger, cloud-based models and smaller, local models to improve reasoning efficiency and accuracy. By assigning simpler tasks to smaller models and more complex tasks to larger models, we reduce computational overhead while maintaining high performance. Furthermore, I will present a human behavior simulation framework that fully unleashes the reasoning power of LLMs to mimic human cognitive processes and generate realistic human behaviors. These works open up new possibilities for powering social science research with low-cost, reproducible experiments on Homo Silicus, i.e., computational human models driven by LLM agents.
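One way to read the small/large-model synergy above is as complexity-based routing. The sketch below illustrates that reading; the scoring heuristic, threshold, and model callables are invented for illustration and are not the speaker's actual method.

```python
def estimate_complexity(task: str) -> float:
    """Crude proxy for task difficulty; a real system might use a learned
    classifier or the small model's own confidence instead."""
    markers = ("why", "plan", "trade-off", "multi-step")
    return sum(marker in task.lower() for marker in markers) / len(markers)

def route(task: str, small_model, large_model, threshold: float = 0.25) -> str:
    """Send simpler tasks to the cheap local model and harder ones
    to the cloud model, trading cost against reasoning strength."""
    if estimate_complexity(task) < threshold:
        return small_model(task)   # low latency, low cost
    return large_model(task)       # stronger reasoning, higher cost
```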
11:00-11:40. Group Discussion 1
Lunch (11:40-13:00)
Research Talks and Panel Discussion 2 (13:00-14:30): Aligning AI towards Human Values and Social Equity

This session will explore the capabilities AI must develop, beyond task performance, to function as a companion to humans, focusing on its alignment with human values, ethics, and cultural preferences, and on achieving social equity. Drawing perspectives from computer science, social science, and philosophy, we will investigate how to assess AI's value orientations, implement effective alignment methods, and eliminate social biases to foster fairness. Participants will gain insights into the technical and philosophical foundations of AI alignment and fairness, learning how AI can be designed to promote equitable outcomes and be benevolent toward society as a whole.
13:00-13:10. Building globally equitable generative AI

Whilst generative AI's ability to process and generate human-like content has opened up new possibilities, it is not equally useful for everyone; because of this, its impact is unlikely to be evenly distributed globally. In this talk I will discuss recent research showing that, when it comes to Africa, generative AI has not only a language problem but, equally if not more importantly, a knowledge problem. I will describe how we have designed a program of human-centred AI research to address these challenges and build globally equitable AI.
13:10-13:20. When Alignment meets o1

This talk presents initial discussions on alignment research following the release of OpenAI's o1 model. (1) Challenge: superalignment, where the unlocked potential of model capabilities reinforces the necessity of aligning superintelligence. (2) Opportunity: System-2 alignment, which suggests aligning the process rather than just the outcome, much like educating children by guiding the decision-making process, not just giving right-or-wrong answers.
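The process-versus-outcome distinction can be made concrete with a toy reward comparison: outcome alignment scores only the final answer, while process (System-2) alignment scores each reasoning step. The score_step callable below stands in for an assumed process reward model; nothing here reflects OpenAI's actual implementation.

```python
def outcome_reward(answer: str, reference: str) -> float:
    """Outcome alignment: reward only whether the final answer is right."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def process_reward(steps: list[str], score_step) -> float:
    """Process (System-2) alignment: reward the quality of each reasoning
    step, e.g., via a process reward model returning a score in [0, 1]."""
    if not steps:
        return 0.0
    return sum(score_step(step) for step in steps) / len(steps)
```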
13:20-13:30. Dynamic Value Alignment: Enhancing User Autonomy Through a Multi-Agent, Moral Foundations Theoretical Framework

This talk presents an interdisciplinary project on value alignment in AI. First, I address key challenges such as context-sensitivity, moral complexity, equitable personalization, and user autonomy. Then, I draw on Moral Foundations Theory, Multi-Agent Design, and Evaluative AI frameworks to tackle these issues. By integrating Moral Foundations Theory, we capture the diversity of normative behaviors across cultures, while Multi-Agent Design enables flexible alignment with diverse value systems without extensive retraining. The Evaluative AI framework, unlike traditional recommendation models, provides balanced evidence for decision-making, ensuring interpretability and accountability. Throughout the presentation, I emphasize the importance of understanding human cognitive architecture, emotional influences, and human moral reasoning. The proposed solution highlights the crucial role of combining insights from philosophy, cognitive science, and computer science to create ethically aligned AI systems that are adaptable across diverse cultural and professional settings.
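As a toy rendering of the multi-agent, Moral Foundations idea, one evaluator agent could score an action per foundation, with culture-specific weights shifting the aggregate instead of retraining a single model. The foundation list follows Moral Foundations Theory; the aggregation scheme and callables are purely illustrative, not the speaker's design.

```python
# The five classic Moral Foundations Theory dimensions.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

def evaluate_action(action: str, agents: dict, weights: dict) -> float:
    """Each foundation-specific agent scores the action in [-1, 1];
    culture-specific weights shift the aggregate without retraining."""
    total = sum(weights[name] * agents[name](action) for name in FOUNDATIONS)
    return total / sum(weights[name] for name in FOUNDATIONS)
```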
13:30-13:40. Mapping out a human rights-based approach to AI: Contextualizing principles through processes

At times it seems there are more frameworks describing ethical AI than grains of sand on the beach. What distinguishes ours from the rest? We will present our model, which we have been refining for the past three years in active dialogue with a number of generous contributors from various industries, disciplines, and backgrounds. But we are even more keen to hear the feedback and reactions of this distinguished audience, so we list our contact information in advance; please generously share your insights and wisdom: ssonnenberg@snu.ac.kr / yonglim@snu.ac.kr. Since 2022, Seoul National University's Artificial Intelligence Policy Initiative (SAPI) has been working with a prominent policy think tank in Geneva and a variety of diplomats, corporate executives, venture capitalists, technologists, ESG experts, scholars, and activists to develop what we are calling a "Human Rights Based Approach to New and Emerging Technologies", or HRBA@Tech: the 2022 framework (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4587332) and its 2023 application to AI startups (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4880112). Our model was conceived from the ground up, learning from the experience of those who have been working to build trustworthy AI (and other emerging technologies). We highlight five ways in which our model differs from the vast majority of existing frameworks, and we suggest that it can be useful for corporations seeking to develop AI that is not only safe but also contributes to making our world a better place to live.
13:40-13:50. An Adaptive and Robust Evaluation Framework of LLM Values

Aligning LLMs with human values is essential for ethical AI deployment, yet it requires a comprehensive understanding of the value orientations embedded in these models. We focus on the generative evaluation paradigm, which directly deciphers LLMs' values from their generated responses. This paradigm relies on reference-free value evaluators, and two key challenges emerge: the evaluator should adapt to changing human value definitions while resisting its own biases (adaptability), and it should remain robust across varying value expressions and scenarios (generalizability). To handle these challenges, we introduce CLAVE, a novel framework that integrates two complementary LLMs: a large model that extracts high-level value concepts from diverse responses, leveraging its extensive knowledge, and a small model fine-tuned on these concepts to adapt to human value annotations. This dual-model design strikes a balance between the two requirements. Based on the generative evaluation paradigm, we create a comprehensive value leaderboard that tests a diverse array of value systems across various LLMs. It also enables us to compare the values of different countries with those of LLMs, thereby identifying the models that most closely align with specific cultural or even personalized values.
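A schematic of the dual-model design described above might look as follows. All function names and the prompt are placeholders sketched from the abstract; the actual CLAVE implementation may differ.

```python
def extract_concepts(response: str, large_llm) -> list[str]:
    """Step 1: the large model abstracts high-level value concepts
    (one per line) from a raw generated response."""
    prompt = f"List the value concepts expressed in:\n{response}"
    return large_llm(prompt).splitlines()

def classify_values(concepts: list[str], small_model) -> dict[str, float]:
    """Step 2: the small model, fine-tuned on human value annotations,
    scores the concepts against a target value system."""
    return small_model(" ; ".join(concepts))

def evaluate_response(response: str, large_llm, small_model) -> dict[str, float]:
    """Reference-free generative evaluation: decipher values directly
    from what the model generated, with no gold answer required."""
    return classify_values(extract_concepts(response, large_llm), small_model)
```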
13:50-14:30. Group Discussion 2
Break (14:30-14:45)
Research Talks and Panel Discussion 3 (14:45-16:15): New Opportunities and Challenges from Generative AI for Society

Generative AI such as ChatGPT has gained wide popularity and adoption. Like other historically disruptive technologies, its impact on society will be deep and complex. In this session, we discuss the opportunities and challenges that generative AI brings to society, and how humans and society will be reshaped by it.
14:45-14:55. The social roots of AI-assisted policymaking: evidence from a survey experiment

Artificial intelligence (AI) is increasingly influential in public policy areas, including election forecasting and targeted service delivery. Previous studies have recognized AI algorithms as tools for policymakers, examining their effects on government performance. However, the political implications of AI for public perception, particularly regarding AI-driven public services and views of government agencies, remain underexplored. This study uses a randomized experiment with a vignette design to investigate AI's impact on political preferences through automated decision-making (ADM). Our findings reveal that ADM notably enhances public trust in policymaking, although this trust varies among individuals. Additionally, ADM significantly boosts people's sense of internal political efficacy and their preference for scientifically informed policymaking.
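For readers unfamiliar with the method, a randomized vignette experiment reduces to random assignment plus a difference-in-means estimate of the treatment effect. The sketch below uses simulated data with an invented effect size, purely to illustrate the design, not the study's findings.

```python
import random

def simulate_vignette_experiment(n: int = 1000, effect: float = 0.4) -> float:
    """Randomly assign respondents to an ADM vignette (treatment) or a
    human-decision vignette (control), then estimate the average
    treatment effect on reported trust via a difference in means."""
    random.seed(0)
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(5.0, 1.0)      # trust on a 1-7 scale
        if random.random() < 0.5:              # random assignment
            treated.append(baseline + effect)  # saw the ADM vignette
        else:
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)
```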
14:55-15:05. Disclosing use of AI in the generation of synthetic content: a regulatory perspective

Lawmakers around the world are introducing regulations requiring transparency in the use of AI across various contexts. The proposed Australian Guardrails for High-Risk AI framework also recommends that the use of AI in generating synthetic content should be disclosed. This presentation explores the challenges of establishing rules for when and how the use of generative AI should be disclosed in relation to synthetic content. It draws on a public survey we conducted to examine public opinions on when the use of generative AI should be disclosed, depending on the extent of its involvement in creating a particular piece of content.
15:05-15:15. Regulatory Frameworks for Generative AI: Jurisdictional Perspectives

My talk will examine various jurisdictional approaches to addressing potential societal harms associated with generative AI, focusing on: (i) the U.S.'s federal implementation of Executive Order 14110 and California SB-1047 (vetoed), (ii) China's Generative AI Interim Measures, AI Safety Governance Framework, and Scholar Draft of the AI Act, (iii) the EU's AI Act, and (iv) Korea's AI Bill and draft AI privacy frameworks. Key topics will include each jurisdiction's approach to issues such as (i) public safety and security, (ii) infringement harms (copyright and privacy), (iii) challenges associated with deepfakes and other synthetic media, and (iv) other emerging concerns.
15:15-15:25. Japan's Approach to AI Governance: How to build an interoperable regulatory framework?

This presentation will provide an overview of Japan's social and cultural landscape for promoting AI and the current regulatory frameworks supporting AI development. It will then examine global legal approaches to AI and the progress of international collaboration through the G7 Hiroshima AI Process, emphasizing key differences among G7 member states. This analysis aims to guide the audience in reconsidering the scope and feasibility of achieving an internationally interoperable regulatory framework for AI.
15:25-16:15. Group Discussion 3
Closing (16:15-16:20)