Societal AI

Lecture 1: Responsibility, Explainability and Controllability of Human Behavior (Date: 2022/11/21)

  • At the invitation of Microsoft Research Asia, Professor Wan Xiaohong of the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University, also a researcher at the IDG McGovern Institute for Brain Research, delivered an online lecture on the theme of “Responsibility, Explainability, and Controllability of Human Behavior.” This lecture marked the first installment of the “Responsible Artificial Intelligence” series, chaired by Senior Researcher Wang Xiting from Microsoft Research Asia.

    Professor Wan Xiaohong discussed the criteria used to assess whether an individual should be held accountable for their actions, focusing on the explainability and controllability of behavior. She highlighted the inherent difficulty of evaluating the brain’s intangible cognitive processes and internal states: neither external observers nor the individuals themselves can readily assess their detailed states and causal relationships. Many human behaviors are driven by rapid, intuitive processes that leave room for unfounded post hoc rationalizations, and even controlled processes are often accompanied by explanations that are largely vague and biased.

    To address these issues, Professor Wan approached the topic from the perspective of the neuro-mechanisms of human behavior, expanding on the mechanisms and algorithms of human-human and human-machine joint decision-making. She proposed research paradigms and theoretical models to advance the field of human-machine hybrid intelligence.

    The lecture drew an enthusiastic response and lively interaction from the researchers. Questions were raised by researchers such as Li Dongsheng, Xie Xing, and Yi Xiaoyuan on topics including the technical means of activating neurons, experimental frequency and costs, and the ethics of animal experiments. Further discussions were held on whether such brain functions have played a positive role in human evolution and whether moral decision-making belongs to System 1 or System 2.

    Subsequent talks in the “Responsible Artificial Intelligence” series will be announced.

  • Wan Xiaohong

    Wan Xiaohong is a Professor at the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University and a Researcher at the IDG McGovern Institute for Brain Research.

Lecture 2: Towards a Holistic Framework for Responsible AI (Date: 2022/11/29)

  • At the invitation of Microsoft Research Asia, Associate Professor Steven Euijong Whang from the Korea Advanced Institute of Science and Technology (KAIST) delivered an online lecture on “Building a Comprehensive Framework for Responsible Artificial Intelligence.” Whang emphasized that responsible AI requires not only improving model accuracy during training but also ensuring fairness, robustness, explainability, and privacy protection. He highlighted that these considerations apply to every step of machine learning, beginning with the data, which necessitates a holistic framework supporting all of these goals.

    During the lecture, Whang presented a range of research outcomes from his team on AI fairness, robustness, explainability, and privacy. He also proposed several potential collaboration directions, expressing a desire to further academic cooperation with Microsoft Research Asia on these topics. The session was interactive, with active participation from researchers including Wang Jindong and Wu Fangzhao, who asked questions about variable control in AI fairness and robustness research and data selection for training. Whang provided detailed answers, contributing to a lively online discussion.

    This lecture was the second in the “Responsible Artificial Intelligence” series, hosted by Chief Researcher Xie Xing from Microsoft Research Asia, indicating an ongoing commitment to exploring and promoting responsible AI practices within the tech community and beyond.

  • Steven Euijong Whang

    Steven Euijong Whang is an Associate Professor at the Korea Advanced Institute of Science and Technology (KAIST).

Lecture 3: Towards Holistic Adversarial Robustness for Deep Learning (Date: 2022/12/13)

  • Invited by Microsoft Research Asia, Principal Research Scientist Pin-Yu Chen from the IBM Thomas J. Watson Research Center presented an online talk on “AI Model Detectors: Toward Holistic Adversarial Robustness in Deep Learning” as part of the “Responsible AI” lecture series. This session, the third in the series, was moderated by Senior Researcher Wang Jindong from Microsoft Research Asia.

    Chen offered insights into the field of adversarial machine learning, focusing on his research contributions: optimization-driven adversarial attacks and their implications for model explainability and scientific discovery, versatile defense strategies for model rectification, robustness assessment techniques that are independent of any specific attack, and efficient transfer learning through model reprogramming. A minimal example from the gradient-based attack family is sketched at the end of this summary.

    The interactive event saw active engagement from participants, with researchers like Zhu Bin, Xie Xing, Zhang Huishuai, and Yi Xiaoyuan posing questions on various aspects of Chen’s research. Dr. Chen responded with detailed explanations, deepening the audience’s understanding.

    This lecture series underscores the importance of integrating adversarial robustness into AI development to ensure the creation of secure and reliable intelligent systems, fostering further dialogue and collaboration in the realm of responsible AI.
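    As an illustration of the gradient-based attack family mentioned above, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. This is a textbook example, not necessarily one of the methods presented in the talk; the model, labeled inputs, and epsilon budget are assumed to be given.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        """One-step gradient-sign perturbation with an L-infinity budget."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            # Step each input dimension in the direction that increases the loss.
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in the valid image range
        return x_adv.detach()
    ```

    Attack-agnostic robustness assessment, by contrast, aims to estimate a model’s margins or certified radii without committing to any one perturbation procedure such as this.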

  • Pin-Yu Chen

    Dr. Pin-Yu Chen is a Principal Research Scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the Chief Scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects.

Lecture 4: Recent Advances in Robust Machine Learning (Date: 2023/1/18)

  • When machine learning systems are trained and deployed in the real world, we face various types of uncertainty. For example, training data at hand may contain insufficient information, label noise, and bias. In this talk, I will give an overview of our recent advances in robust machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, complementary-label classification, etc.), noisy label learning (noise transition estimation, instance-dependent noise, clean sample selection, etc.), and domain adaptation (joint importance-predictor learning for covariate shift adaptation, dynamic importance-predictor learning for full distribution shift, etc.).
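    As a concrete, hedged illustration of the covariate shift setting mentioned above, here is a minimal sketch of the standard density-ratio trick: estimate importance weights w(x) = p_test(x) / p_train(x) with a probabilistic classifier that discriminates test inputs from training inputs, then reweight the training loss. This is a generic baseline, not the joint or dynamic importance-predictor learning methods named in the abstract; the logistic-regression choice and variable names are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def estimate_importance_weights(X_train, X_test):
        """Approximate w(x) = p_test(x) / p_train(x) via a domain classifier."""
        X = np.vstack([X_train, X_test])
        # s = 0 for training inputs, s = 1 for test inputs.
        s = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
        clf = LogisticRegression(max_iter=1000).fit(X, s)
        p_test = clf.predict_proba(X_train)[:, 1]
        # By Bayes' rule: w(x) = [P(s=1|x) / P(s=0|x)] * [n_train / n_test].
        return (len(X_train) / len(X_test)) * p_test / (1.0 - p_test)

    # Usage sketch: many estimators accept per-sample weights, e.g.
    #   w = estimate_importance_weights(X_tr, X_te)
    #   model = LogisticRegression().fit(X_tr, y_tr, sample_weight=w)
    ```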

  • Masashi Sugiyama

    Masashi Sugiyama received a Ph.D. in Computer Science from Tokyo Institute of Technology in 2001. He has been a Professor at the University of Tokyo since 2014 and concurrently Director of the RIKEN Center for Advanced Intelligence Project (AIP) since 2016.

Lecture 5: Ethical Concerns of AI Applications and Their Governance Logic (Date: 2023/03/16)

  • Artificial Intelligence (AI) is propelling society into an era of intelligence at an unprecedented pace, transforming both production and lifestyle. Alongside these transformations, AI has given rise to ethical challenges such as manipulation, “black box” operations, discrimination, privacy concerns, and accountability dilemmas. Invited by Microsoft Research Asia, Meng Tianguang, Vice Dean and Tenured Professor at the School of Social Sciences, Tsinghua University, presented a lecture titled “Ethical Concerns of AI Applications and Their Governance Logic.”

    In this lecture, Professor Meng approached the topic from an interdisciplinary perspective, providing an insightful discussion on the eight dimensions of AI ethics: personal information protection, fairness, transparency, safety, responsibility, authenticity, human dignity, and human autonomy. Utilizing this framework, he conducted public opinion surveys and social media data mining to meticulously analyze societal concerns regarding AI ethics and their characteristics. He further explored solutions to AI ethical governance from the dimensions of rights, industry, and profession.

    The seminar was vibrant, with participants actively engaging with the presentation and delving into discussions with Professor Meng on topics such as copyright of AI-generated content, self-regulation of AI, and societal supervision. This exchange underscored the importance of addressing ethical considerations in the development and implementation of AI technologies.

  • Meng Tianguang

    Meng Tianguang is Vice Dean and a Tenured Professor at the School of Social Sciences, Tsinghua University.

Lecture 6: Shifting Winds, Changing Tides: Emerging Issues for Market Competition in the Next Phase of AI Evolution (Date: 2023/5/18)

  • The recent splash made by Transformer-based generative AI has spurred new discourse surrounding its potential impact on market competition. There are those who argue that we are heading towards natural monopolies in relevant markets due to the resource-intensive nature of training and moderating generative AI and other related intelligent systems. Others see glimmers of hope that AI-based innovation will manage to upend current dominance in digital markets; yet some take a more nuanced view, arguing that market disruption, if any, will primarily come from incumbents, thereby signifying a shift in focus from accuracy to computational efficiency.

  • Yong LIM

    Yong LIM is an Associate Professor at Seoul National University, School of Law, where he also served as Associate Dean of Student Affairs until 2020.

Lecture 7: Heterogeneity of AI-Induced Societal Harms and the Failure of Omnibus AI Laws (Date: 2023/5/18)

  • Trustworthy AI discourses postulate the homogeneity of AI systems, aim to derive common causes regarding the harms they generate, and demand uniform human interventions. Such “AI monism” has spurred legislation for omnibus AI laws requiring any “high-risk” AI systems to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as demonstrated by the EU’s draft AI Regulation, the U.S.’s draft Algorithmic Accountability Act of 2022, and Korea’s AI Bill.

    However, it is irrational to require “high-risk” or critical AIs to comply with the full set of safety, fairness, accountability, and privacy regulations when AIs can instead be separated by the harms they entail: safety risks (robots and other intelligent agents), biases (discriminative models), infringements (generative models), and accuracy/robustness/privacy problems (cognitive models). Accordingly, I propose the following four initial categorizations, subject to ongoing empirical reassessment:

    1. Intelligent Agents: For self-driving cars, safety regulations must be adapted to address incremental accident risks arising from autonomous behavior through track testing, safety warnings, event data recording, and kill switches.
    2. Discriminative Models: For models like credit-scoring or hiring AI, law must focus on mitigating allocative harms and disclosing the marginal effects of immutable features such as race and gender.
    3. Generative Models: For language models or AI-powered content creation, law should optimize developers’ liability for data mining and content generation, balancing potential social harms arising from infringing content and the negative impact of excessive moderation.
    4. Cognitive Models: For identification and AI diagnostics, regulation should address quality of service and safety while effectively tackling privacy, surveillance, and security issues.
  • Sangchul PARK

    Sangchul PARK is an Assistant Professor at Seoul National University, School of Law, with joint appointments at the Interdisciplinary Program in AI and the Department of Mathematical Sciences.

Lecture 8: Computing Based on Societal Thinking (Date: 2023/11/9)

    This talk will discuss how to integrate big data and AI methods with a focus on social issues, exploring two cases of research and practical application. The first case examines how the yin-yang theory in Chinese philosophy can be applied to understand the emergence of collective intelligence within enterprise teams. Two factors, high knowledge diversity and a tight collaboration network within the team, are inherently contradictory: a dense network tends to bring about homogeneity, while diversity fosters a sparse network. In actual big data analysis, however, the impact of these two factors on the emergence of collective intelligence is sometimes reinforcing and sometimes restraining. We will address how to promote reinforcement and avoid restraint; one way to operationalize the two factors is sketched after this summary.

    Another case explores how large language models can be used in local-government governance to assist with ‘one-click reporting’ and provide ‘one-click assessments’ of various NGOs and social workers. The goal is to use AI to replace form-filling and genuinely reduce the workload of community governance.
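    As a hedged sketch of how the first case’s two factors might be operationalized (the density and entropy measures below are illustrative assumptions, not necessarily the study’s actual metrics):

    ```python
    from collections import Counter

    import networkx as nx
    import numpy as np

    def team_features(G, member_skills):
        """Collaboration density and knowledge diversity for one team.

        G: networkx Graph of collaboration ties among team members.
        member_skills: list of skill/knowledge labels, one per member.
        """
        density = nx.density(G)  # realized ties / possible ties
        counts = np.array(list(Counter(member_skills).values()), dtype=float)
        p = counts / counts.sum()
        diversity = float(-(p * np.log(p)).sum())  # Shannon entropy over skills
        return density, diversity

    # Whether the two factors reinforce or restrain each other can then be
    # probed with an interaction term, e.g. regressing team performance on
    # density + diversity + density * diversity.
    ```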

  • Jar-Der Luo

    Jar-Der Luo is a Joint Appointed Professor at Tsinghua University (Beijing), Chief Editor of the Journal of Social Computing, and PI at the Tsinghua University Computational Social Sciences & National Governance Lab.

    Yuanyi Zhen

    Yuanyi Zhen is a Ph.D. candidate in the Department of Sociology at Tsinghua University, focusing on the Science of Science, Social Computing, and Complex Social Theories.

Lecture 9: Can Large Language Models Transform Computational Social Science? (Date: 2024/7/5)

  • Large language models (LLMs) provide great opportunities for analyzing text data at scale and have transformed the way humans interact with AI systems in a wide range of fields and disciplines. This talk shares two distinct approaches to how LLMs can influence and potentially transform computational social science research.

    The first part analyzes the zero-shot performance of 13 LLMs on 24 representative computational social science benchmarks to provide a roadmap for using LLMs as computational social science tools. The second part explores social skill training with LLMs, presenting how we use LLMs to teach conflict resolution skills through simulated practice. We conclude by discussing concerns about using LLMs in the social sciences and offering recommendations on how to address them.
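    To make the “LLMs as computational social science tools” idea concrete, here is a minimal zero-shot annotation sketch. The client library, model name, and label set are placeholder assumptions, not details from the talk.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def zero_shot_label(post, labels):
        """Ask an LLM to assign exactly one label to a social media post."""
        prompt = (
            "Classify the following post into exactly one of these "
            f"categories: {', '.join(labels)}.\n\nPost: {post}\nCategory:"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic output for annotation use
        )
        return resp.choices[0].message.content.strip()

    # e.g. zero_shot_label("We must act on climate now.", ["pro", "anti", "neutral"])
    ```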

  • Diyi Yang

    Diyi Yang is an assistant professor in the Computer Science Department at Stanford University, affiliated with the Stanford NLP Group, Stanford HCI Group, and Stanford Human Centered AI Institute.

Lecture 10: From Leaderboards to Operating Conditions (Date: 2024/7/10)

  • AI evaluation is much more than benchmarks, metrics, and leaderboards. It should also be much more, and much better, than ‘evals’. This talk will cover the state of AI evaluation through three major obstacles:

    1. Diverse Paradigms: There are very different paradigms and communities that often talk past each other: 1) the TEVV (testing, evaluation, verification, and validation) school, 2) the benchmark school, 3) the ‘evals’ school, and 4) the cognitive school.
    2. Understanding Capability: There is limited understanding of what capability means and how to measure it, as opposed to performance.
    3. Predictability Focus: There is little explicit recognition that AI evaluation is mostly about predictability: shifting from the question “is it accurate or safe in general?” to “will it work for this operating condition?”

    Understanding AI evaluation as a prediction problem clarifies research challenges and opportunities, leading to the goal of making Predictable AI a reality.
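    A minimal sketch of evaluation-as-prediction: fit an “assessor” that maps per-instance operating-condition features to the probability that the system succeeds on that instance, instead of reporting a single aggregate score. The feature set and model choice here are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_assessor(X_conditions, success):
        """Learn P(success | operating conditions).

        X_conditions: per-instance condition features (e.g., input length,
        domain, difficulty proxies); success: 0/1 outcomes of the evaluated
        system on those instances.
        """
        return LogisticRegression(max_iter=1000).fit(X_conditions, success)

    # assessor = fit_assessor(X_cond, outcomes)
    # assessor.predict_proba(new_conditions)[:, 1] answers the question
    # "will it work for this operating condition?" as a probability.
    ```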

  • José Hernández-Orallo

    José Hernández-Orallo is Professor at the Universitat Politècnica de València, Spain, and Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK.

Lecture 11: The Promise and Peril of AI Stand-ins for Social Agents and Interactions (Date: 2024/9/6)

  • Large Language Models (LLMs), through their exposure to massive collections of online text, learn the ability to reproduce the perspectives and linguistic styles of diverse social and cultural groups. This capability suggests a powerful social scientific application—the simulation of empirically realistic, culturally situated human subjects.

    Synthesizing recent research in artificial intelligence and computational social science, I outline a methodological foundation for simulating human subjects and their social interactions. I then identify nine characteristics of current models that impair realistic human simulation, including atemporality, social acceptability bias, response uniformity, and poverty of sensory experience. For each of these areas, I explore promising approaches to overcoming the associated shortcomings. I conclude with a discussion of technological implications and ethical considerations. Given the rapid pace of change in these models, I advocate for an ongoing methodological program on the simulation of human subjects and collectives that keeps pace with technical progress.
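    As a minimal sketch of the simulation idea (the persona format, client library, and model name are illustrative assumptions, not the speaker’s protocol):

    ```python
    from openai import OpenAI

    client = OpenAI()

    def simulated_response(persona, question):
        """Elicit a survey answer from a persona-conditioned LLM 'subject'."""
        system = ("You are a survey respondent. Answer in the first person, "
                  f"consistent with this profile: {persona}")
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": question},
            ],
            temperature=1.0,  # sampling variability across simulated subjects
        )
        return resp.choices[0].message.content
    ```

    Shortcomings such as response uniformity can be probed directly with a harness like this, for example by comparing the variance of sampled answers against human survey distributions.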

  • James Evans

    James Evans is the Director of the Knowledge Lab and a Professor of Sociology at the University of Chicago, also serving as Faculty Director of the Computational Social Science program.