Venue: Meeting Room San Li Tun, 4th Floor, Microsoft Building 2, No. 5 Danling Street, Haidian District, Beijing, China.
Session | Time | Title and Abstract | Speakers
---|---|---|---
Opening | 9:30-9:40 | |
Keynote Speech | 9:40-10:10 | **Societal AI: Tackling AI Challenges with Social Science Insights** |
Break and Group Photo | 10:10-10:30 | |
Research Talks and Panel Discussion 1 | 10:30-11:40 | **LLM-driven social science and generative agents.** This session discusses the synergy between cutting-edge AI technologies and the ever-evolving field of (computational) social science. As large language models (LLMs) continue to revolutionize data analysis, predictive modeling, and content generation, their potential to transform (computational) social science research and practice becomes increasingly promising. In particular, we will delve into the current status and challenges of LLM-based social simulation. Participants will gain insights into how LLMs can be used to model complex social phenomena, simulate human behavior, and generate realistic social interactions. |
| 10:30-10:40 | **AI to transform social science and vice versa: studies on economics and cultural understanding.** Generative AI has been transforming research disciplines ranging from computer science and the natural sciences to social science. How can advanced GenAI technology assist social science research, particularly on the factors that deeply influence everyone? In this talk, I will share our latest efforts in two areas: economics and cultural understanding. The first effort adopts GenAI to simulate competition dynamics in society, aiming for accurate and profound simulations. In the second work, we study how to leverage social theories to help GenAI models better adapt to different cultures, given that current models are trained predominantly on Western cultures. I hope these works can shed light on a better co-adaptation of social science and GenAI research in the future. |
| 10:40-10:50 | **LLMob: An LLM Agent Framework for Personal Mobility Generation.** This study introduces a novel approach using large language models (LLMs) integrated into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation. We evaluate our framework against state-of-the-art personal mobility generation approaches, demonstrating its effectiveness and its potential applications in urban mobility. Overall, this study marks a pioneering effort to design an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis. |
| 10:50-11:00 | **Designing Cognitive Theory-inspired LLM Agents for Efficient Human Behavior Simulation.** The rapid advancement of large language models (LLMs) has led to the emergence of human-like commonsense reasoning, sparking the development of numerous LLM agents. However, current LLM agents are often constrained by high computational costs. In this talk, I will introduce a cognitive theory-inspired framework that elicits efficient reasoning in LLM agents. This framework harnesses the synergy between larger, cloud-based models and smaller, local models to improve reasoning efficiency and accuracy. By assigning simpler tasks to smaller models and more complex tasks to larger models, we reduce computational overhead while maintaining high performance. Furthermore, I will present a human behavior simulation framework that fully unleashes the reasoning power of LLMs to mimic human cognitive processes and generate realistic human behaviors. These works open up new possibilities for powering social science research with low-cost, reproducible experiments on Homo Silicus, i.e., computational human models driven by LLM agents. (A minimal code sketch of this routing idea appears after the agenda.) |
| 11:00-11:40 | **Group Discussion 1** |
Lunch | 11:40-13:00 | |
Research Talks and Panel Discussion 2 | 13:00-14:30 | **Aligning AI towards Human Values and Social Equity.** This session will explore the capabilities AI must develop, beyond task performance, to function as a companion to humans, focusing on its alignment with human values and ethics, its cultural preferences, and its role in achieving social equity. Drawing perspectives from computer science, social science, and philosophy, we will investigate how to assess AI's value orientations, implement effective alignment methods, and eliminate social biases to foster fairness. Participants will gain insights into the technical and philosophical foundations of AI alignment and fairness, learning how AI can be designed to promote equitable outcomes and be benevolent toward society as a whole. |
| 13:00-13:10 | **Building globally equitable generative AI.** Whilst generative AI's ability to process and generate human-like content has opened up new possibilities, it is not equally useful for everyone, and its impact is therefore unlikely to be evenly distributed globally. In this talk I will discuss recent research showing that, when it comes to Africa, generative AI has not only a language problem but also, equally if not more importantly, a knowledge problem. I will describe how we have designed a program of human-centred AI research to address these challenges and build globally equitable AI. |
| 13:10-13:20 | **When Alignment meets o1.** This talk presents initial discussions on alignment research following the release of OpenAI's o1 model. (1) Challenge: Superalignment, where the unlocked potential of model capabilities reinforces the necessity of aligning superintelligence. (2) Opportunity: System2 Alignment, which suggests aligning the process rather than just the outcome, much like educating children by guiding the decision-making process, not just giving right-or-wrong answers. |
| 13:20-13:30 | **Dynamic Value Alignment: Enhancing User Autonomy Through Multi-agent, Moral Foundations Theoretical Framework.** This talk presents an interdisciplinary project on value alignment in AI. First, I address key challenges such as context-sensitivity, moral complexity, equitable personalization, and user autonomy. Then, I draw on Moral Foundations Theory, Multi-Agent Design, and Evaluative AI frameworks to tackle these issues. By integrating Moral Foundations Theory, we capture the diversity of normative behaviors across cultures, while Multi-Agent Design enables flexible alignment with diverse value systems without extensive retraining. The Evaluative AI framework, unlike traditional recommendation models, provides balanced evidence for decision-making, ensuring interpretability and accountability. Throughout the presentation, I emphasize the importance of understanding human cognitive architecture, emotional influences, and human moral reasoning. The proposed solution highlights the crucial role of combining insights from philosophy, cognitive science, and computer science to create ethically aligned AI systems that are adaptable across diverse cultural and professional settings. |
| 13:30-13:40 | **Mapping out a human rights-based approach to AI: Contextualizing principles through processes.** At times it seems there are more frameworks describing ethical AI than grains of sand on the beach. What distinguishes ours from the rest? Since 2022, Seoul National University's Artificial Intelligence Policy Initiative (SAPI) has been working with a prominent policy think tank in Geneva and a variety of diplomats, corporate executives, venture capitalists, technologists, ESG experts, scholars, and activists to develop what we call a "Human Rights Based Approach to New and Emerging Technologies", or HRBA@Tech (2022 framework: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4587332; 2023 application to AI startups: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4880112). Our model was conceived from the ground up, learning from the experience of those who have been working to build trustworthy AI and other emerging technologies, and we have been refining it for the past three years in active dialogue with generous contributors from various industries, disciplines, and backgrounds. We will highlight five ways in which our model differs from the vast majority of existing frameworks, and suggest that it can be useful for corporations seeking to develop AI that is not only safe but also contributes to making our world a better place to live. We are even keener to hear the feedback and reactions of this distinguished audience, so we list our contact information in advance; please share your insights and wisdom generously: ssonnenberg@snu.ac.kr / yonglim@snu.ac.kr. |
| 13:40-13:50 | **An Adaptive and Robust Evaluation Framework of LLM Values.** Aligning LLMs with human values is essential for ethical AI deployment, yet it requires a comprehensive understanding of the value orientations embedded in these models. We focus on the generative evaluation paradigm, which deciphers LLMs' values directly from their generated responses. This paradigm relies on reference-free value evaluators, for which two key challenges emerge: the evaluator should adapt to changing definitions of human values rather than its own biases (adaptability), and it should remain robust across varying value expressions and scenarios (generalizability). To handle these challenges, we introduce CLAVE, a novel framework that integrates two complementary LLMs: a large model that extracts high-level value concepts from diverse responses, leveraging its extensive knowledge, and a small model fine-tuned on these concepts to adapt to human value annotations. This dual-model design balances the two requirements. Based on the generative evaluation paradigm, we create a comprehensive value leaderboard that tests a diverse array of value systems across various LLMs, which also enables us to compare the values of different countries with those of LLMs, thereby identifying the models that most closely align with specific cultural or even personalized values. (A minimal code sketch of this two-stage design appears after the agenda.) |
| 13:50-14:30 | **Group Discussion 2** |
Break | 14:30-14:45 | |
Research Talks and Panel Discussion 3 | 14:45-16:15 | **New Opportunities and Challenges from Generative AI for Society.** Generative AI such as ChatGPT has gained wide popularity and adoption. Like other disruptive technologies in history, its impact on society will be deep and complex. In this session, we discuss the opportunities and challenges that generative AI brings to society, and how humans and society will be reshaped by it. |
| 14:45-14:55 | **The social roots of AI-assisted policymaking: evidence from a survey experiment.** Artificial intelligence (AI) is increasingly influential in public policy areas, including election forecasting and targeted service delivery. Previous studies have recognized AI algorithms as tools for policymakers, examining their effects on government performance. However, the political implications of AI for public perception, particularly regarding AI-driven public services and views of government agencies, remain underexplored. This study uses a randomized experiment with a vignette design to investigate AI's impact on political preferences through automated decision-making (ADM). Our findings reveal that ADM notably enhances public trust in policymaking, although this trust varies among individuals. Additionally, ADM significantly boosts people's sense of internal political efficacy and their preference for scientifically informed policymaking. |
| 14:55-15:05 | **Disclosing use of AI in the generation of synthetic content: a regulatory perspective.** Lawmakers around the world are introducing regulations requiring transparency in the use of AI across various contexts. The proposed Australian Guardrails for High-Risk AI framework also recommends that the use of AI in generating synthetic content be disclosed. This presentation explores the challenges of establishing rules for when and how the use of generative AI should be disclosed in relation to synthetic content. It draws on a public survey we conducted to examine public opinions on when the use of generative AI should be disclosed, depending on the extent of its involvement in creating a particular piece of content. |
| 15:05-15:15 | **Regulatory Frameworks for Generative AI: Jurisdictional Perspectives.** My talk will examine various jurisdictional approaches to addressing potential societal harms associated with generative AI, focusing on: (i) the U.S.'s federal implementation of Executive Order 14110 and California SB-1047 (vetoed), (ii) China's Generative AI Interim Measures, AI Safety Governance Framework, and Scholar Draft of the AI Act, (iii) the EU's AI Act, and (iv) Korea's AI Bill and draft AI privacy frameworks. Key topics will include each jurisdiction's approach to issues such as (i) public safety and security, (ii) infringement harms (copyright and privacy), (iii) challenges associated with deepfakes and other synthetic media, and (iv) other emerging concerns. |
| 15:15-15:25 | **Japan's Approach to AI Governance: How to build an interoperable regulatory framework?** This presentation will provide an overview of Japan's social and cultural landscape for promoting AI and the current regulatory frameworks supporting AI development. It will then examine global legal approaches to AI and the progress of international collaboration through the G7 Hiroshima AI Process, emphasizing key differences among G7 member states. This analysis aims to guide the audience in reconsidering the scope and feasibility of achieving an internationally interoperable regulatory framework for AI. |
| 15:25-16:15 | **Group Discussion 3** |
Closing | 16:15-16:20 | |
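
The talk "Designing Cognitive Theory-inspired LLM Agents for Efficient Human Behavior Simulation" describes dividing work between a small local model and a large cloud model by task difficulty. As a rough illustration only, here is a minimal Python sketch of that routing pattern; the `call_model` helper, the model names, and the scalar difficulty score are hypothetical placeholders, not the speaker's actual framework.

```python
# Minimal sketch of difficulty-based routing between a small local model
# and a large cloud model. All names and the difficulty heuristic are
# hypothetical placeholders for illustration.

from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    # Hypothetical difficulty score in [0, 1]; a real system might derive
    # this from prompt length, required reasoning depth, or a classifier.
    difficulty: float


def call_model(model_name: str, prompt: str) -> str:
    # Placeholder for an actual LLM API call or local inference.
    return f"[{model_name}] response to: {prompt}"


def route(task: Task, threshold: float = 0.5) -> str:
    """Send simple tasks to the cheap local model, hard ones to the cloud."""
    if task.difficulty < threshold:
        return call_model("small-local-model", task.prompt)
    return call_model("large-cloud-model", task.prompt)


if __name__ == "__main__":
    print(route(Task("What day follows Monday?", difficulty=0.1)))
    print(route(Task("Plan a week of activities for a simulated commuter.",
                     difficulty=0.9)))
```

The design choice is simply cost asymmetry: most simulated behaviors are routine and cheap to generate locally, so reserving the large model for the rare, genuinely complex decisions keeps the overall simulation affordable.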
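Similarly, the CLAVE abstract describes a two-stage, reference-free evaluation: a large model extracts high-level value concepts from a response, and a small fine-tuned model judges them against a value definition. The sketch below is a toy approximation of that division of labor; `extract_concepts` and `judge_alignment` stand in for the two LLM calls and use trivial keyword logic purely for illustration, not the authors' actual pipeline.

```python
# Toy sketch of a CLAVE-style two-stage, reference-free value evaluation.
# Both functions stand in for LLM calls; the keyword logic and labels are
# hypothetical placeholders.


def extract_concepts(response: str) -> list[str]:
    # Stage 1 (large model): distill high-level value concepts from a
    # free-form response. Here, a trivial keyword lookup stands in.
    keywords = {"honest": "honesty", "fair": "fairness", "obey": "authority"}
    return [concept for key, concept in keywords.items()
            if key in response.lower()]


def judge_alignment(concepts: list[str], target_values: set[str]) -> float:
    # Stage 2 (small fine-tuned model): score how well the extracted
    # concepts match a target value definition. Here, simple overlap.
    if not concepts:
        return 0.0
    return len(set(concepts) & target_values) / len(concepts)


if __name__ == "__main__":
    response = "I would be honest with the customer and treat everyone fairly."
    concepts = extract_concepts(response)
    score = judge_alignment(concepts, target_values={"honesty", "fairness"})
    print(concepts, score)  # ['honesty', 'fairness'] 1.0
```

Separating extraction from judgment is what gives the adaptability the abstract claims: when the definition of a target value changes, only the small judging stage needs updating, while the expensive concept extraction stays fixed.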