{"id":564561,"date":"2019-02-01T08:59:48","date_gmt":"2019-02-01T16:59:48","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=564561"},"modified":"2019-02-01T09:51:45","modified_gmt":"2019-02-01T17:51:45","slug":"guidelines-for-human-ai-interaction-design","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/guidelines-for-human-ai-interaction-design\/","title":{"rendered":"Guidelines for human-AI interaction design"},"content":{"rendered":"

\"\"<\/p>\n

The increasing availability and accuracy of AI have stimulated the use of AI technologies in mainstream user-facing applications and services. Along with opportunities to infuse valuable AI services into a wide range of products come challenges and questions about best practices and guidelines for human-centered design. A dedicated team of Microsoft researchers addressed this need by synthesizing and validating a set of guidelines for human-AI interaction. This work marks an important step toward much-needed best practices for the complexities AI designers face.

The integration of AI services such as prediction, recognition, and natural language understanding brings multiple new considerations to the fore for designers. For example, interaction designers have to grapple with the rates of failure and success of AI inference, with changes in system behavior that may come with ongoing machine learning, and with the understandability and controllability of AI functions.

The variability of current AI designs, together with high-profile reports of failures ranging from the humorous, embarrassing, or disruptive (for example, benign autocorrect errors) to the more serious, when users cannot effectively understand or control an AI system (for example, accidents in semi-autonomous vehicles), highlights opportunities for creating more intuitive and effective user experiences with AI. The ongoing conversation on human-centered design for AI systems shows that designers are hungry for trustworthy AI-centric design heuristics or guidelines.

Over the last 20 years, research scientists and engineers have proposed guidelines and recommendations for designing effective interaction with AI-infused systems. Ideas span recommendations for managing user expectations, moderating the level of autonomy, supporting the resolution of ambiguity, and providing awareness about changes that may occur as the system learns about users. Unfortunately, many of these design suggestions are scattered across different publications and are rarely presented explicitly as guidelines. The Microsoft research team identified more than 150 such design recommendations, many of which captured similar ideas. By distilling and validating them into one unified set of guidelines, this work empowers the community to move forward and build on existing knowledge.

"The design community didn't have a unified set of guidelines for creating intuitive interactions between humans and AI systems. We set out to create and validate one," said Saleema Amershi, lead researcher on the development of the Guidelines for Human-AI Interaction.

The Guidelines for Human-AI Interaction, as well as the process for developing and validating them, will be presented at the 2019 CHI Conference on Human Factors in Computing Systems in Glasgow, Scotland. The team (Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz) synthesized more than 20 years of knowledge and thinking in AI design, spanning academia and industry, into a compact set of generally applicable design guidelines for human-AI interaction.

In synthesizing broad and specific guidance from a variety of sources into a unified set of guidelines that could be universally embraced, the CHI 2019 paper, titled Guidelines for Human-AI Interaction, also marks the 20th anniversary of Eric Horvitz's formative CHI 1999 paper, which proposed principles for smoothly weaving together human and AI capabilities and harnessing a mix of AI and human initiative.

Following a rigorous process, the Microsoft researchers began by collecting more than 150 AI-related design recommendations (potential guidelines) from respected sources ranging from scholarly research papers to blog posts and internal documents. Grouping recommendations by theme, the team was able to condense them into a manageable number. They then embarked on multiple rounds of evaluation with user experience (UX) and human-computer interaction (HCI) experts, seeking to ensure that the guidelines were easy to understand as well as applicable to a wide range of popular AI products.

"We wanted to ensure the guidelines are specific and observable at the UI level. So, we eliminated overarching principles like 'set expectations' or 'build trust' and instead translated them into specific, actionable guidelines," said Mihaela Vorvoreanu, senior program manager.

The resulting 18 Guidelines for Human-AI Interaction are grouped into four sections that prescribe how an AI system should behave upon initial interaction, as the user interacts with the system, when the system is wrong, and over time. The guidelines are provided to support design decisions, not to serve as a simple checklist; they are meant to stimulate conversations about design decisions between user experience and engineering practitioners and to foster further research in this evolving space. The authors recognize that there will be numerous situations where AI designers must consider tradeoffs among guidelines and weigh the importance of one or more over others. Rising capabilities and use cases may suggest a need for additional guidelines.

\"\"

Guideline 10: Scope services when in doubt. When AutoReplace in Word is uncertain of a correction, it engages in disambiguation by displaying multiple options the user can select from.
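As a rough illustration of how this guideline could translate into application logic, below is a minimal sketch of confidence-based disambiguation: apply a correction automatically only when confidence is high, and otherwise present the candidate options. The threshold, class names, and function names are illustrative assumptions, not the actual AutoReplace implementation.

```python
from dataclasses import dataclass

@dataclass
class Correction:
    text: str
    confidence: float  # model's estimated probability that this correction is right

# Assumed cutoff for acting without asking; a real system would tune this.
AUTO_APPLY_THRESHOLD = 0.90

def handle_correction(candidates: list[Correction]) -> dict:
    """Auto-apply a confident correction; otherwise ask the user to choose."""
    if not candidates:
        return {"action": "none"}
    best = max(candidates, key=lambda c: c.confidence)
    if best.confidence >= AUTO_APPLY_THRESHOLD:
        # High confidence: apply the single best correction.
        return {"action": "apply", "text": best.text}
    # Uncertain: scope the service by offering options instead of acting.
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    return {"action": "disambiguate", "options": [c.text for c in ranked]}

# Example: an ambiguous correction yields a disambiguation menu, not an auto-edit.
print(handle_correction([Correction("their", 0.55), Correction("there", 0.40)]))
```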

The guidelines were developed and tested on products with graphical user interfaces. There are opportunities to develop specific extensions or modifications of the guidelines for voice interaction and for specialized, high-stakes uses such as semi-autonomous vehicles.

"We're still in the early days of harnessing AI technologies to extend human capabilities," said Eric Horvitz, director of Microsoft Research Labs. "There is so much opportunity ahead, and also many intriguing challenges. An important direction is sharing and refining sets of principles and designs about how to best integrate AI capabilities into human-computer interaction experiences."

\"\"

Guideline 15: Encourage granular feedback. Ideas in Excel empowers users to understand their data through high-level visual summaries, trends, and patterns. It encourages feedback on each suggestion by asking, "Is this helpful?"
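To make the idea of granular feedback concrete, here is a minimal sketch that records each "Is this helpful?" response against an individual suggestion rather than the feature as a whole. The identifiers, storage, and function names are hypothetical and are not the Ideas in Excel implementation.

```python
from collections import defaultdict

# Hypothetical in-memory log: feedback is keyed by individual suggestion.
feedback_log: dict[str, list[bool]] = defaultdict(list)

def record_feedback(suggestion_id: str, helpful: bool) -> None:
    """Store one "Is this helpful?" response for a specific suggestion."""
    feedback_log[suggestion_id].append(helpful)

def helpfulness_rate(suggestion_id: str) -> float:
    """Fraction of positive responses for a given suggestion."""
    votes = feedback_log[suggestion_id]
    return sum(votes) / len(votes) if votes else 0.0

# Example: two users respond to the same suggested chart.
record_feedback("trend-over-time", True)
record_feedback("trend-over-time", False)
print(helpfulness_rate("trend-over-time"))  # 0.5
```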

Guidelines for human-AI interaction can extend beyond design into the realm of responsible and trustworthy AI. For example, in November 2018, a Microsoft advisory committee focused on the responsible development and application of AI technologies published a set of guidelines on the design of conversational interfaces such as chatbots and virtual assistants, Responsible Bots: 10 Guidelines for Developers of Conversational AI. That work, and ongoing efforts on guidelines for human-AI interaction, are being hosted and coordinated across Microsoft by the Aether Committee, a company-wide advisory committee on responsible AI announced by CEO Satya Nadella as part of the initiative to ensure the company's AI-related efforts are deeply grounded in Microsoft's core values and principles and benefit society at large. Aether hosts a set of topically focused working groups. Amershi serves as co-chair of the Aether working group on Human-AI Interaction and Collaboration.

Human-AI Interaction Design Guidelines

INITIALLY

01 Make clear what the system can do.
Help the user understand what the AI system is capable of doing.

02 Make clear how well the system can do what it can do.
Help the user understand how often the AI system may make mistakes.

DURING INTERACTION

03 Time services based on context.
Time when to act or interrupt based on the user's current task and environment.

04 Show contextually relevant information.
Display information relevant to the user's current task and environment.

05 Match relevant social norms.
Ensure the experience is delivered in a way that users would expect, given their social and cultural context.

06 Mitigate social biases.
Ensure the AI system's language and behaviors do not reinforce undesirable and unfair stereotypes and biases.

WHEN WRONG

07 Support efficient invocation.
Make it easy to invoke or request the AI system's services when needed.

08 Support efficient dismissal.
Make it easy to dismiss or ignore undesired AI system services.

09 Support efficient correction.
Make it easy to edit, refine, or recover when the AI system is wrong.

10 Scope services when in doubt.
Engage in disambiguation or gracefully degrade the AI system's services when uncertain about a user's goals.

11 Make clear why the system did what it did.
Enable the user to access an explanation of why the AI system behaved as it did.

OVER TIME

12 Remember recent interactions.
Maintain short-term memory and allow the user to make efficient references to that memory.

13 Learn from user behavior.
Personalize the user's experience by learning from their actions over time.

14 Update and adapt cautiously.
Limit disruptive changes when updating and adapting the AI system's behaviors.

15 Encourage granular feedback.
Enable the user to provide feedback indicating their preferences during regular interaction with the AI system.

16 Convey the consequences of user actions.
Immediately update or convey how user actions will impact future behaviors of the AI system.

17 Provide global controls.
Allow the user to globally customize what the AI system monitors and how it behaves.

18 Notify users about changes.
Inform the user when the AI system adds or updates its capabilities.

For more details and examples of each guideline, read the paper, Guidelines for Human-AI Interaction.
