{"id":875019,"date":"2022-09-08T06:33:31","date_gmt":"2022-09-08T13:33:31","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=875019"},"modified":"2022-11-21T08:34:11","modified_gmt":"2022-11-21T16:34:11","slug":"metaphors-for-human-ai-interaction-workshop","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/metaphors-for-human-ai-interaction-workshop\/","title":{"rendered":"Metaphors for Human-AI Interaction Workshop"},"content":{"rendered":"\n\n\n\n\n
This is an invite-only workshop. Please do not forward.<\/strong><\/p>\n\n\n\n Design for human-AI interaction has drawn on various metaphors, including the collaborating partner<\/em>, the helpful<\/em> assistant<\/em> and the co-pilot<\/em>. These metaphors tend to focus on explicit<\/em> interactions between humans and AI. However, interactions between humans and intelligent systems are also implicit (opens in new tab)<\/span><\/a>, making it difficult for users to build mental models of what the system is doing or how it does it. In this workshop, we will explore an extended set of metaphors, with the aim of facilitating (i) design and (ii) user understanding of how people work both with<\/em> and through<\/em> AI systems, as they create content and data, both intentionally and through traces of activity. <\/p>\n\n\n\n\n\n For instance, Viva Topics (opens in new tab)<\/span><\/a> is an intelligent system that builds an organisational knowledge base from content generated by organisation members, and then disseminates this across the organisation. Interactions between AI and organisation members in this case are largely implicit, and the algorithms that build the knowledge base and highlight its content to other organisation members might be understood as mediators<\/em> (opens in new tab)<\/span><\/a>, in that they mediate interactions between people and the knowledge base, and also between people and other people by connecting them through content recommendations. Another relevant metaphor is that of infrastructure<\/em> (opens in new tab)<\/span><\/a>. The pervasive and background qualities of these systems resonate with other technological infrastructures that the HCI community has considered.<\/p>\n\n\n\n Despite the infrastructural quality of Viva Topics, the output of the ML that underpins it can be foregrounded and directly edited by people. 
For instance, human-readable schemas, produced by probabilistic programming (opens in new tab)<\/span><\/a> techniques, can be curated<\/em> by organisation members and are then stored as stable<\/em> values in the knowledge base. These representations of knowledge fold into organisational work by forming the basis of AI-enabled recommendations (e.g., of other organisation members who are knowledgeable about a topic, or of relevant resources). In contrast, outputs produced by neural-embedding-based ML models are fluid<\/em>, being generated in response to user queries in the moment. Deep neural ML is often associated with partnership experiences such as GitHub Copilot (opens in new tab)<\/span><\/a>. While this interaction is, in many ways, explicit, it also has implicit qualities, in that human input informs the ML in ways that are not visible to its users. <\/p>\n\n\n\n Thus, different ML technologies have different implications for how metaphors can support users, designers and developers in understanding and creating intelligent systems. These metaphors may speak to both implicit and explicit qualities of interactions between people and the same ML technology. <\/p>\n\n\n\n In this workshop, we will explore the idea that expanding the repertoire of metaphors employed when developing ML systems and communicating their properties to users could benefit both the design of intelligent systems and users' understanding of them. <\/p>\n\n\n\nSpeakers<\/h2>\n\n\n\n
(opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
Susanne B\u00f8dker<\/a><\/h5>\n\n\n\n
Professor, Aarhus University<\/em><\/p>\n<\/div>\n\n\n\n (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
Ewa Luger<\/a><\/h5>\n\n\n\n
Professor of Human-Data Interaction, University of Edinburgh<\/em><\/p>\n<\/div>\n\n\n\n (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
Andrew Rice<\/a><\/h5>\n\n\n\n
Principal Researcher, GitHub<\/em><\/p>\n<\/div>\n\n\n\n (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
Yvonne Rogers<\/a><\/h5>\n\n\n\n
Professor and Director of UCLIC, UCL<\/em><\/p>\n<\/div>\n\n\n\n (opens in new tab)<\/span><\/a><\/figure>\n\n\n\n
Nur Yildirim<\/a><\/h5>\n\n\n\n
PhD student, Carnegie Mellon University<\/em><\/p>\n<\/div>\n\n\n\n<\/a><\/figure>\n\n\n\n
Yordan Zaykov<\/a><\/h5>\n\n\n\n
Principal Research Engineering Manager, Microsoft Research<\/em><\/p>\n<\/div>\n<\/div>\n\n\n\nAgenda<\/h2>\n\n\n\n