{"id":875082,"date":"2022-09-26T10:51:39","date_gmt":"2022-09-26T17:51:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=875082"},"modified":"2022-12-06T19:42:27","modified_gmt":"2022-12-07T03:42:27","slug":"responsible-ai-an-interdisciplinary-approach-workshop","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/responsible-ai-an-interdisciplinary-approach-workshop\/","title":{"rendered":"Responsible AI: An Interdisciplinary Approach Workshop"},"content":{"rendered":"\n\n\n\n\n

When studying responsible AI (artificial intelligence), we are most often studying its impact on people and society. Sociologists, psychologists, and media scholars have long studied these areas and have accumulated substantial research results. When we talk about fairness, we are better off working with sociologists to analyze how AI could lead to the stratification of society and the polarization of people’s opinions. As we study interpretability, we also hope to discuss with psychologists why people fundamentally need more transparent models, and how best to reveal the inner mechanisms of AI models. Communication scientists can help us gain a deeper understanding of the AI models used in information distribution. From another perspective, we are also very interested in applying responsible AI within these disciplines to help solve their own problems. In this workshop, we invited researchers from different disciplines to discuss with us how we can jointly advance research in responsible AI.<\/p>\n\n\n\n

Speakers<\/h2>\n\n\n\n
\n
\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
James A. Evans<\/a><\/h5>\n\n\n\n

Professor, Director of Knowledge Lab
The University of Chicago<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Pascale Fung (opens in new tab)<\/span><\/a><\/h5>\n\n\n\n

Professor
Hong Kong University of Science & Technology<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Rui Guo<\/a><\/h5>\n\n\n\n

Associate Professor of Law
Renmin University of China<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Fang Luo<\/a><\/h5>\n\n\n\n

Professor
Beijing Normal University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Beibei Shi<\/a><\/h5>\n\n\n\n

Senior Research Program Manager
Microsoft Research Asia<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n

\n
\n
\"David (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
David Stillwell<\/a><\/h5>\n\n\n\n

Professor
University of Cambridge Judge Business School<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xiaohong Wan<\/a><\/h5>\n\n\n\n

Professor
Beijing Normal University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"Xiting<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xiting Wang<\/a><\/h5>\n\n\n\n

Principal Researcher
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"Fangzhao<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Fangzhao Wu<\/a><\/h5>\n\n\n\n

Principal Researcher
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xing Xie<\/a><\/h5>\n\n\n\n

Senior Principal Research Manager
Microsoft Research Asia<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n

\n
\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Yongfeng Zhang<\/a><\/h5>\n\n\n\n

Assistant Professor
Rutgers University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Lidong Zhou<\/a><\/h5>\n\n\n\n

Corporate Vice President, Managing Director
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Jonathan Zhu<\/a><\/h5>\n\n\n\n

Chair Professor of Computational Social Science
City University of Hong Kong<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Jun Zhu<\/a><\/h5>\n\n\n\n

Bosch AI Professor
Tsinghua University<\/em><\/p>\n<\/div>\n\n\n\n

<\/div>\n<\/div>\n\n\n\n
<\/div>\n\n\n\n

Agenda<\/h2>\n\n\n\n
Time<\/th>Session<\/th><\/tr><\/thead>
8:30\u20138:40<\/td>Opening remarks | video (opens in new tab)<\/span><\/a>
Lidong Zhou (opens in new tab)<\/span><\/a>, Microsoft Research Asia<\/em><\/td><\/tr>
8:40\u20138:45<\/td>Group photo
All speakers and attendees<\/em><\/td><\/tr>
8:45\u20139:15<\/td>Keynote: Responsible AI Research at MSR Asia | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Xing Xie (opens in new tab)<\/span><\/a>, Microsoft Research Asia<\/em>

Abstract: With the rapid development of artificial intelligence, its social responsibility has received extensive attention. In this talk, I will present our recent research on important problems such as privacy protection, explainability, and ethics in AI. In particular, I will describe how we design methods to address the challenges posed by big models, including the huge overhead of computation and communication and the impact of model complexity on transparency and bias. I will also share some of our thoughts on the crucial role of interdisciplinary research in this field.<\/td><\/tr>
<\/td>Session 1: Social Impact of AI<\/strong>
Chair: Beibei Shi, Microsoft Research Asia<\/em><\/td><\/tr>
9:15\u20139:45<\/td>Research Talk: Towards Human Value Based NLP | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Pascale Fung (opens in new tab)<\/span><\/a>, Hong Kong University of Science & Technology<\/em>

Abstract: The AI \u201carms race\u201d has reached a point where different organizations in different countries are competing to build ever larger \u201clanguage\u201d models in text, speech, images, and so on, trained on ever larger collections of data. Our society in general, and our users in particular, are demanding that AI technology be more responsible \u2013 more robust, fairer, more explainable, more trustworthy. Natural language processing technologies built on top of these large pre-trained language models are expected to align with these and other human \u201cvalues\u201d because they impact our lives directly. The core challenge of \u201cvalue-aligned\u201d NLP (or AI in general) is twofold: 1) What are these values and who defines them? 2) How can NLP algorithms and models be made to align with these values? In fact, different cultures and communities might have different approaches to ethical issues. Even when people from different cultures happen to agree on a set of common principles, they might disagree on how to implement those principles. It is therefore necessary that we anticipate value definitions to be dynamic and multidisciplinary. I propose that we modularize the set of value definitions so that it is external to the development of NLP algorithms and of large pretrained language models, and that we encapsulate the language model to preserve its integrity. We also argue that value definition should not be left in the hands of NLP\/AI researchers or engineers. At best, we can be involved at the stage of value definition, but engineers and developers should not be the decision makers on what those values should be. In addition, some values are now enshrined in legal requirements, which argues further that value definition should be disentangled from algorithm and model development. In this talk, I will present initial experiments on value-based NLP in which we allow the input to an NLP system to carry human-defined values or ethical principles that lead to different output results. I propose that many NLP tasks, from classification to generation, should produce results according to human-defined principles for better performance and explainability.<\/td><\/tr>
9:45\u201310:15<\/td>Research Talk: The Long March Towards AI Fairness | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Rui Guo (opens in new tab)<\/span><\/a>, Renmin University of China<\/em>

Abstract: To protect people from unfair treatment or discrimination, conventional wisdom from legal academia points to certain protected factors or social group categories to identify and prevent prohibited behaviors or biases. This has caused problems in the context of Artificial Intelligence (AI). This talk uses an example of disability discrimination to highlight the role of stereotypes and the difficulty of achieving AI fairness. Dealing with stereotypes requires deeper reflection on the problem of moral agency in AI.<\/td><\/tr>
10:15\u201310:45<\/td>Research Talk: On the Adversarial Robustness of Deep Learning | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Jun Zhu (opens in new tab)<\/span><\/a>, Tsinghua University\u00a0<\/em>

Abstract: Although deep learning methods have made significant progress in many tasks, it has been widely recognized that current methods are vulnerable to adversarial noise. This weakness poses serious risks to safety-critical applications. In this talk, I will present some recent progress on adversarial attack and defense for deep learning, including theory, algorithms, and benchmarks.<\/td><\/tr>
<\/td>Session 2: Responsible AI: an Interdisciplinary Approach<\/strong>
Chair: Xing Xie, Microsoft Research Asia<\/em><\/td><\/tr>
10:45\u201312:00<\/td>Panel Discussion | video (opens in new tab)<\/span><\/a>
Host:
Xing Xie (opens in new tab)<\/span><\/a>, Microsoft Research Asia<\/em>
Panelists:
Pascale Fung (opens in new tab)<\/span><\/a>, Hong Kong University of Science & Technology
<\/em>
Rui Guo (opens in new tab)<\/span><\/a>, Renmin University of China
<\/em>
Jun Zhu (opens in new tab)<\/span><\/a>, Tsinghua University
<\/em>
Jonathan Zhu (opens in new tab)<\/span><\/a>, City University of Hong Kong
<\/em>
Xiaohong Wan (opens in new tab)<\/span><\/a>, Beijing Normal University<\/em><\/td><\/tr>
<\/td>Session 3: Responsibility in Personalization<\/strong>
Chair: Fangzhao Wu, Microsoft Research Asia<\/em><\/td><\/tr>
14:00\u201314:30<\/td>Research Talk: Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Yongfeng Zhang (opens in new tab)<\/span><\/a>, Rutgers University<\/em>

Abstract: As the bridge between humans and AI, recommender systems are at the frontier of human-centered AI research. However, inappropriate use or development of recommendation techniques may bring negative effects to humans and society at large, such as user distrust due to the non-transparency of the recommendation mechanism, unfairness of the recommendation algorithm, a lack of user control over the recommendation system, and user privacy risks due to the extensive use of users\u2019 private data for personalization. In this talk, we will discuss how to build trustworthy recommender systems as recommendation algorithms advance from shallow models to deep models to large models, including but not limited to the unique role of recommender system research in the AI community as a representative Subjective AI task, the relationship between Subjective AI and trustworthy computing, and typical recommendation methods covering different perspectives of trustworthy computing, such as causal and counterfactual reasoning, neural-symbolic modeling, natural language explanations, federated learning, user-controllable recommendation, echo chamber mitigation, personalized prompt learning, and beyond.<\/td><\/tr>
14:30\u201315:00<\/td>Research Talk: Evidence-based Evaluation for Responsible AI | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Jonathan Zhu (opens in new tab)<\/span><\/a>, City University of Hong Kong<\/em>

Abstract: Current efforts on responsible AI have focused on why AI should be socially responsible and how to produce responsible AI. An equally important question that hasn\u2019t been adequately addressed is how responsible the deployed AI products are. The question is ignored most of the time, and occasionally answered with anecdotal evidence or casual evaluation. We need to understand that good evaluations are not easy, quick, or cheap to carry out. On the contrary, good evaluations rely on evidence that is systematically collected using proven methods, completely independent of the process, data, and even research staff responsible for the relevant AI products. The practice of evidence-based medicine over the last two decades provides a relevant and informative model for the AI industry to follow.<\/td><\/tr>
15:00\u201315:30<\/td>Research Talk: Personalizing Responsibility within AI Systems: A Case for Designing Diversity | video (opens in new tab)<\/span><\/a>
James Evans (opens in new tab)<\/span><\/a>, The University of Chicago<\/em>

Abstract: Here I explore the importance of personalizing our assessment of particular humans’ values, objectives, and constraints, both at the outset of a task and on an ongoing basis in systems trusted to augment human capacity. Moreover, augmenting human capacity requires augmenting human perspectives. The wisdom of crowds hinges on the independence and diversity of their members\u2019 information and approaches. I explore how the wisdom of scientific, technological, and business crowds for sustained performance and advance operates through a process of collective abduction\u2014the collision of deduction and induction\u2014wherein unexpected findings stimulate innovators to forge new insights that make the surprising unsurprising. Drawing on tens of millions of research papers and patents across the life sciences, physical sciences, and patented inventions, I show that surprising designs and discoveries are the best predictor of outsized success, and that surprising advances systematically emerge across, rather than within, researchers or teams; most commonly when innovators from one field surprisingly publish problem-solving results to an audience in a distant and diverse other. This scales insights from my prior work showing that, across innovators, teams, and fields, connection and conformity are associated with reduced replication and impeded innovation. Using these principles, I simulate processes of scientific and technological search to demonstrate the relationship between crowded fields and constrained collective inference, and I illustrate how inverting the traditional artificial intelligence approach to avoid rather than mimic human search enables the design of trusted diversity that systematically violates established field boundaries and is associated with markedly successful innovation predictions. I conclude with a discussion of prospects and challenges, in a connected age, for trusted and sustainable augmentation through the design and preservation of personalized difference.<\/td><\/tr>
<\/td>Session 4: Interpretability and Psychology<\/strong>
Chair: Xiting Wang, Microsoft Research Asia<\/em><\/td><\/tr>
15:30\u201316:00<\/td>Research Talk: Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models? | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
David Stillwell (opens in new tab)<\/span><\/a>, University of Cambridge<\/em>

Abstract: In automated video interviews (AVIs), candidates answer pre-set questions by recording responses on camera, and interviewers then use the recordings to guide their hiring decisions. To reduce the burden on interviewers, AVI companies commonly use black-box algorithms to assess the quality of responses, but little academic research has reported on their accuracy. We collected 694 video interviews (200 hours) along with self-reported Big Five personality scores. In Study 1, we use machine learning to predict personality from 1,710 verbal, facial, and audio features. In Study 2, we use a subset of 653 intuitively understandable features to build an explainable model using ridge regression. We report the accuracies of both models and weigh in on whether it would be better to use an explainable algorithm.<\/td><\/tr>
16:00\u201316:30<\/td>Research Talk: Interpretability, Responsibility and Controllability of Human Behaviors | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Xiaohong Wan (opens in new tab)<\/span><\/a>, Beijing Normal University<\/em>

Abstract: When judging whether a person should take responsibility for their behavior, the judge often evaluates whether the behavior is interpretable and under their control. However, it is difficult for external observers to evaluate such quantities, as the processes and internal states inside the brain are intangible. Furthermore, it is also difficult for the person to evaluate these internal states in detail, or to trace their causality, even as the owner of the behaviors. Many human behaviors are driven by fast, intuitive processes that leave only post-hoc explanations. Even for controlled processes, the explanations remain largely unclear. In this talk, I would like to discuss these issues in terms of the neural mechanisms underlying human behaviors.<\/td><\/tr>
16:30\u201317:00<\/td>Research Talk: Development of a Game-Based Assessment to Measure Creativity | video (opens in new tab)<\/span><\/a> | slides (opens in new tab)<\/span><\/a>
Fang Luo (opens in new tab)<\/span><\/a>, Beijing Normal University<\/em>

Abstract: Creativity measurement is the basis of creativity research. Traditional creativity tests have long had many limitations. First, traditional tests place too much emphasis on ‘novelty’ and ignore ‘suitability’. Second, divergent thinking is equated with creative thinking. Third, they rely on a single evaluation index. Fourth, the traditional test tasks are simple and abstract, divorced from real problems, and lack ecological validity. The purpose of this study was to develop a game-based assessment that measures creativity, collects log-file data, and enables the assessment of various creative thinking abilities. The creativity game test used the ‘Design of Evidence Centers’ as a framework to construct three problem situations around ‘prehistoric human life’, in which participants ‘synthesized’ creative solutions by combining cards. Log-file data and criterion test data from 515 college students were collected. Study 1 showed that the test had good psychometric properties. In Study 2, a Bayesian network was constructed with key operations as nodes to explore the influence of individual insight level on game responses, providing evidence for construct validity. Participants rated the experience of taking the creativity game test highly, which reflects the advantages and promise of game-based testing.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n
<\/div>\n\n\n\n

Workshop organizers<\/h2>\n\n\n\n

Xing Xie<\/a> (Chair), Microsoft Research Asia
Beibei Shi<\/a> (Chair), Microsoft Research Asia
Xiting Wang<\/a> (Chair), Microsoft Research Asia
Fangzhao Wu<\/a> (Chair), Microsoft Research Asia 
Weizhe Shi, Microsoft Research Asia 
Xiaoyuan Yi,<\/a> Microsoft Research Asia 
Bin Zhu,<\/a> Microsoft Research Asia 
Dongsheng Li,<\/a> Microsoft Research Asia 
Jindong Wang,<\/a> Microsoft Research Asia <\/p>\n\n\n\n

<\/div>\n\n\n\n

Microsoft\u2019s Event Code of Conduct<\/h4>\n\n\n\n

Microsoft\u2019s mission is to empower every person and every organization on the planet to achieve more. This includes events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, at any aspect of the program including business and social activities, regardless of location. <\/p>\n\n\n\n

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft\u2019s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct (opens in new tab)<\/span><\/a>. In short, the entire experience at the venue must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report (opens in new tab)<\/span><\/a> any concerns, harassing behavior, or suspicious or disruptive activity to venue staff, the event host or owner, or event staff. Microsoft reserves the right to refuse admittance to or remove any person from company-sponsored events at any time in its sole discretion.<\/p>\n\n\n\n

\n
Report a concern<\/a><\/div>\n<\/div>\n\n\n","protected":false},"excerpt":{"rendered":"

When studying responsible AI (artificial intelligence), most of the time we are studying its impact on people and society. Sociologists, psychologists, and media scientists have long-term accumulation and research results in these areas. When we talk about fairness, we would better work with sociologists to analyze how AI could lead to stratification of society and […]<\/p>\n","protected":false},"featured_media":874611,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2022-10-24","msr_enddate":"","msr_location":"Virtual","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"China Standard Time (GMT+8)","msr_hide_region":false,"msr_private_event":true,"footnotes":""},"research-area":[13556,13559],"msr-region":[],"msr-event-type":[],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-875082","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-social-sciences","msr-locale-en_us"],"msr_about":"\n\n\n\n\n

When studying responsible AI (artificial intelligence), we are most often studying its impact on people and society. Sociologists, psychologists, and media scholars have long studied these areas and have accumulated substantial research results. When we talk about fairness, we are better off working with sociologists to analyze how AI could lead to the stratification of society and the polarization of people's opinions. As we study interpretability, we also hope to discuss with psychologists why people fundamentally need more transparent models, and how best to reveal the inner mechanisms of AI models. Communication scientists can help us gain a deeper understanding of the AI models used in information distribution. From another perspective, we are also very interested in applying responsible AI within these disciplines to help solve their own problems. In this workshop, we invited researchers from different disciplines to discuss with us how we can jointly advance research in responsible AI.<\/p>\n\n\n\n

Speakers<\/h2>\n\n\n\n
\n
\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
James A. Evans<\/a><\/h5>\n\n\n\n

Professor, Director of Knowledge Lab
The University of Chicago<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Pascale Fung<\/a><\/h5>\n\n\n\n

Professor
Hong Kong University of Science & Technology<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Rui Guo<\/a><\/h5>\n\n\n\n

Associate Professor of Law
Renmin University of China<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Fang Luo<\/a><\/h5>\n\n\n\n

Professor
Beijing Normal University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Beibei Shi<\/a><\/h5>\n\n\n\n

Senior Research Program Manager
Microsoft Research Asia<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n

\n
\n
\"David<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
David Stillwell<\/a><\/h5>\n\n\n\n

Professor
University of Cambridge Judge Business School<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xiaohong Wan<\/a><\/h5>\n\n\n\n

Professor
Beijing Normal University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"Xiting<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xiting Wang<\/a><\/h5>\n\n\n\n

Principal Researcher
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"Fangzhao<\/figure>\n\n\n\n
<\/div>\n\n\n\n
Fangzhao Wu<\/a><\/h5>\n\n\n\n

Principal Researcher
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Xing Xie<\/a><\/h5>\n\n\n\n

Senior Principal Research Manager
Microsoft Research Asia<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\n

\n
\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Yongfeng Zhang<\/a><\/h5>\n\n\n\n

Assistant Professor
Rutgers University<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Lidong Zhou<\/a><\/h5>\n\n\n\n

Corporate Vice President, Managing Director
Microsoft Research Asia<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Jonathan Zhu<\/a><\/h5>\n\n\n\n

Chair Professor of Computational Social Science
City University of Hong Kong<\/em><\/p>\n<\/div>\n\n\n\n

\n
\"headshot<\/a><\/figure>\n\n\n\n
<\/div>\n\n\n\n
Jun Zhu<\/a><\/h5>\n\n\n\n

Bosch AI Professor
Tsinghua University<\/em><\/p>\n<\/div>\n\n\n\n

<\/div>\n<\/div>\n\n\n\n
<\/div>\n\n\n\n

Agenda<\/h2>\n\n\n\n
Time<\/th>Session<\/th><\/tr><\/thead>
8:30\u20138:40<\/td>Opening remarks | video<\/a>
Lidong Zhou<\/a>, Microsoft Research Asia<\/em><\/td><\/tr>
8:40\u20138:45<\/td>Group photo
All speakers and attendees<\/em><\/td><\/tr>
8:45\u20139:15<\/td>Keynote: Responsible AI Research at MSR Asia | video<\/a> | slides<\/a>
Xing Xie<\/a>, Microsoft Research Asia<\/em>

Abstract: With the rapid development of artificial intelligence, its social responsibility has received extensive attention. In this talk, I will present our recent research on important problems such as privacy protection, explainability, and ethics in AI. In particular, I will describe how we design methods to address the challenges posed by big models, including the huge overhead of computation and communication and the impact of model complexity on transparency and bias. I will also share some of our thoughts on the crucial role of interdisciplinary research in this field.<\/td><\/tr>
<\/td>Session 1: Social Impact of AI<\/strong>
Chair: Beibei Shi, Microsoft Research Asia<\/em><\/td><\/tr>
9:15\u20139:45<\/td>Research Talk: Towards Human Value Based NLP | video<\/a> | slides<\/a>
Pascale Fung<\/a>, Hong Kong University of Science & Technology<\/em>

Abstract: The AI \u201carms race\u201d has reached a point where different organizations in different countries are competing to build ever larger \u201clanguage\u201d models in text, speech, images, and so on, trained on ever larger collections of data. Our society in general, and our users in particular, are demanding that AI technology be more responsible \u2013 more robust, fairer, more explainable, more trustworthy. Natural language processing technologies built on top of these large pre-trained language models are expected to align with these and other human \u201cvalues\u201d because they impact our lives directly. The core challenge of \u201cvalue-aligned\u201d NLP (or AI in general) is twofold: 1) What are these values and who defines them? 2) How can NLP algorithms and models be made to align with these values? In fact, different cultures and communities might have different approaches to ethical issues. Even when people from different cultures happen to agree on a set of common principles, they might disagree on how to implement those principles. It is therefore necessary that we anticipate value definitions to be dynamic and multidisciplinary. I propose that we modularize the set of value definitions so that it is external to the development of NLP algorithms and of large pretrained language models, and that we encapsulate the language model to preserve its integrity. We also argue that value definition should not be left in the hands of NLP\/AI researchers or engineers. At best, we can be involved at the stage of value definition, but engineers and developers should not be the decision makers on what those values should be. In addition, some values are now enshrined in legal requirements, which argues further that value definition should be disentangled from algorithm and model development. In this talk, I will present initial experiments on value-based NLP in which we allow the input to an NLP system to carry human-defined values or ethical principles that lead to different output results. I propose that many NLP tasks, from classification to generation, should produce results according to human-defined principles for better performance and explainability.<\/td><\/tr>
9:45\u201310:15<\/td>Research Talk: The Long March Towards AI Fairness | video<\/a> | slides<\/a>
Rui Guo<\/a>, Renmin University of China<\/em>

Abstract: To protect people from unfair treatment or discrimination, conventional wisdom from legal academia points to certain protected factors or social group categories to identify and prevent prohibited behaviors or biases. This has caused problems in the context of Artificial Intelligence (AI). This talk uses an example of disability discrimination to highlight the role of stereotypes and the difficulty of achieving AI fairness. Dealing with stereotypes requires deeper reflection on the problem of moral agency in AI.<\/td><\/tr>
10:15\u201310:45<\/td>Research Talk: On the Adversarial Robustness of Deep Learning | video<\/a> | slides<\/a>
Jun Zhu<\/a>, Tsinghua University\u00a0<\/em>

Abstract: Although deep learning methods have made significant progress in many tasks, it has been widely recognized that current methods are vulnerable to adversarial noise. This weakness poses serious risks to safety-critical applications. In this talk, I will present some recent progress on adversarial attack and defense for deep learning, including theory, algorithms, and benchmarks.<\/td><\/tr>
<\/td>Session 2: Responsible AI: an Interdisciplinary Approach<\/strong>
Chair: Xing Xie, Microsoft Research Asia<\/em><\/td><\/tr>
10:45\u201312:00<\/td>Panel Discussion | video<\/a>
Host:
Xing Xie<\/a>, Microsoft Research Asia<\/em>
Panelists:
Pascale Fung<\/a>, Hong Kong University of Science & Technology
<\/em>
Rui Guo<\/a>, Renmin University of China
<\/em>
Jun Zhu<\/a>, Tsinghua University
<\/em>
Jonathan Zhu<\/a>, City University of Hong Kong
<\/em>
Xiaohong Wan<\/a>, Beijing Normal University<\/em><\/td><\/tr>
<\/td>Session 3: Responsibility in Personalization<\/strong>
Chair: Fangzhao Wu, Microsoft Research Asia<\/em><\/td><\/tr>
14:00\u201314:30<\/td>Research Talk: Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models | video<\/a> | slides<\/a>
Yongfeng Zhang<\/a>, Rutgers University<\/em>

Abstract: As the bridge between humans and AI, recommender systems are at the frontier of human-centered AI research. However, inappropriate use or development of recommendation techniques may bring negative effects to humans and society at large, such as user distrust due to the non-transparency of the recommendation mechanism, unfairness of the recommendation algorithm, a lack of user control over the recommendation system, and user privacy risks due to the extensive use of users\u2019 private data for personalization. In this talk, we will discuss how to build trustworthy recommender systems as recommendation algorithms advance from shallow models to deep models to large models, including but not limited to the unique role of recommender system research in the AI community as a representative Subjective AI task, the relationship between Subjective AI and trustworthy computing, and typical recommendation methods covering different perspectives of trustworthy computing, such as causal and counterfactual reasoning, neural-symbolic modeling, natural language explanations, federated learning, user-controllable recommendation, echo chamber mitigation, personalized prompt learning, and beyond.<\/td><\/tr>
14:30\u201315:00<\/td>Research Talk: Evidence-based Evaluation for Responsible AI | video<\/a> | slides<\/a>
Jonathan Zhu<\/a>, City University of Hong Kong<\/em>

Abstract: Current efforts on responsible AI have focused on why AI should be socially responsible and how to produce responsible AI. An equally important question that hasn\u2019t been adequately addressed is how responsible the deployed AI products are. The question is ignored most of the time, and occasionally answered with anecdotal evidence or casual evaluation. We need to understand that good evaluations are not easy, quick, or cheap to carry out. On the contrary, good evaluations rely on evidence that is systematically collected using proven methods, completely independent of the process, data, and even research staff responsible for the relevant AI products. The practice of evidence-based medicine over the last two decades provides a relevant and informative model for the AI industry to follow.<\/td><\/tr>
15:00\u201315:30<\/td>Research Talk: Personalizing Responsibility within AI Systems: A Case for Designing Diversity | video<\/a>
James Evans<\/a>, The University of Chicago<\/em>

Abstract: Here I explore the importance of personalizing our assessment of particular humans' values, objectives, and constraints, both at the outset of a task and on an ongoing basis in systems trusted to augment human capacity. Moreover, augmenting human capacity requires augmenting human perspectives. The wisdom of crowds hinges on the independence and diversity of their members\u2019 information and approaches. I explore how the wisdom of scientific, technological, and business crowds for sustained performance and advance operates through a process of collective abduction\u2014the collision of deduction and induction\u2014wherein unexpected findings stimulate innovators to forge new insights that make the surprising unsurprising. Drawing on tens of millions of research papers and patents across the life sciences, physical sciences, and patented inventions, I show that surprising designs and discoveries are the best predictor of outsized success, and that surprising advances systematically emerge across, rather than within, researchers or teams; most commonly when innovators from one field surprisingly publish problem-solving results to an audience in a distant and diverse other. This scales insights from my prior work showing that, across innovators, teams, and fields, connection and conformity are associated with reduced replication and impeded innovation. Using these principles, I simulate processes of scientific and technological search to demonstrate the relationship between crowded fields and constrained collective inference, and I illustrate how inverting the traditional artificial intelligence approach to avoid rather than mimic human search enables the design of trusted diversity that systematically violates established field boundaries and is associated with markedly successful innovation predictions. I conclude with a discussion of prospects and challenges, in a connected age, for trusted and sustainable augmentation through the design and preservation of personalized difference.<\/td><\/tr>
<\/td>Session 4: Interpretability and Psychology<\/strong>
Chair: Xiting Wang, Microsoft Research Asia<\/em><\/td><\/tr>
15:30\u201316:00<\/td>Research Talk: Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models? | video<\/a> | slides<\/a>
David Stillwell<\/a>, University of Cambridge<\/em>

Abstract: In automated video interviews (AVIs), candidates answer pre-set questions by recording responses on camera, and interviewers then use the recordings to guide their hiring decisions. To reduce the burden on interviewers, AVI companies commonly use black-box algorithms to assess the quality of responses, but little academic research has reported on their accuracy. We collected 694 video interviews (200 hours) along with self-reported Big Five personality scores. In Study 1, we use machine learning to predict personality from 1,710 verbal, facial, and audio features. In Study 2, we use a subset of 653 intuitively understandable features to build an explainable model using ridge regression. We report the accuracies of both models and weigh in on whether it would be better to use an explainable algorithm.<\/td><\/tr>
16:00\u201316:30<\/td>Research Talk: Interpretability, Responsibility and Controllability of Human Behaviors | video<\/a> | slides<\/a>
Xiaohong Wan<\/a>, Beijing Normal University<\/em>

Abstract: When judging whether a person should take responsibility for their behavior, the judge often evaluates whether the behavior is interpretable and under their control. However, it is difficult for external observers to evaluate such quantities, as the processes and internal states inside the brain are intangible. Furthermore, it is also difficult for the person to evaluate these internal states in detail, or to trace their causality, even as the owner of the behaviors. Many human behaviors are driven by fast, intuitive processes that leave only post-hoc explanations. Even for controlled processes, the explanations remain largely unclear. In this talk, I would like to discuss these issues in terms of the neural mechanisms underlying human behaviors.<\/td><\/tr>
16:30\u201317:00<\/td>Research Talk: Development of a Game-Based Assessment to Measure Creativity | video<\/a> | slides<\/a>
Fang Luo<\/a>, Beijing Normal University<\/em>

Abstract: Creativity measurement is the basis of creativity research. Traditional creativity tests have long had many limitations. First, traditional tests place too much emphasis on 'novelty' and ignore 'suitability'. Second, divergent thinking is equated with creative thinking. Third, they rely on a single evaluation index. Fourth, the traditional test tasks are simple and abstract, divorced from real problems, and lack ecological validity. The purpose of this study was to develop a game-based assessment that measures creativity, collects log-file data, and enables the assessment of various creative thinking abilities. The creativity game test used the 'Design of Evidence Centers' as a framework to construct three problem situations around 'prehistoric human life', in which participants 'synthesized' creative solutions by combining cards. Log-file data and criterion test data from 515 college students were collected. Study 1 showed that the test had good psychometric properties. In Study 2, a Bayesian network was constructed with key operations as nodes to explore the influence of individual insight level on game responses, providing evidence for construct validity. Participants rated the experience of taking the creativity game test highly, which reflects the advantages and promise of game-based testing.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n
<\/div>\n\n\n\n

Workshop organizers<\/h2>\n\n\n\n

Xing Xie<\/a> (Chair), Microsoft Research Asia
Beibei Shi<\/a> (Chair), Microsoft Research Asia
Xiting Wang<\/a> (Chair), Microsoft Research Asia
Fangzhao Wu<\/a> (Chair), Microsoft Research Asia 
Weizhe Shi, Microsoft Research Asia 
Xiaoyuan Yi,<\/a> Microsoft Research Asia 
Bin Zhu,<\/a> Microsoft Research Asia 
Dongsheng Li,<\/a> Microsoft Research Asia 
Jindong Wang,<\/a> Microsoft Research Asia <\/p>\n\n\n\n

<\/div>\n\n\n\n

Microsoft\u2019s Event Code of Conduct<\/h4>\n\n\n\n

Microsoft\u2019s mission is to empower every person and every organization on the planet to achieve more. This includes events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, at any aspect of the program including business and social activities, regardless of location. <\/p>\n\n\n\n

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft\u2019s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct<\/a>. In short, the entire experience at the venue must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report<\/a> any concerns, harassing behavior, or suspicious or disruptive activity to venue staff, the event host or owner, or event staff. Microsoft reserves the right to refuse admittance to or remove any person from company-sponsored events at any time in its sole discretion.<\/p>\n\n\n\n

\n
Report a concern<\/a><\/div>\n<\/div>\n\n\n","tab-content":[],"msr_startdate":"2022-10-24","msr_enddate":"","msr_event_time":"China Standard Time (GMT+8)","msr_location":"Virtual","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"October 24, 2022","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"\"Abstract","event_excerpt":"When studying responsible AI (artificial intelligence), most of the time we are studying its impact on people and society. Sociologists, psychologists, and media scientists have long-term accumulation and research results in these areas. When we talk about fairness, we would better work with sociologists to analyze how AI could lead to stratification of society and polarization of people's opinions. As we study interpretability, we also hope to discuss with psychologists why people essentially need more…","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/875082"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":35,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/875082\/revisions"}],"predecessor-version":[{"id":1014252,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/875082\/revisions\/1014252"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/874611"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=875082"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=875082"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=875082"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=875082"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=875082"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=875082"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=875082"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=875082"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=875082"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}