{"id":41,"date":"2021-04-03T16:24:33","date_gmt":"2021-04-03T16:24:33","guid":{"rendered":"https:\/\/aitoolkit.test\/guideline\/scope-services-when-in-doubt\/"},"modified":"2023-05-24T19:46:48","modified_gmt":"2023-05-25T02:46:48","slug":"scope-services-when-in-doubt","status":"publish","type":"guideline","link":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/guideline\/scope-services-when-in-doubt\/","title":{"rendered":"Scope services when in doubt"},"content":{"rendered":"
\n\t\n\n

<h2>Guideline 10: Scope services when in doubt</h2>
\"yellow<\/figure>\n\n<\/div>\n\n\n\n

<h2>Engage in disambiguation or gracefully degrade the AI system’s services when uncertain about a user’s goals.</h2>

<p>In ambiguous situations, less can be more. For example, consider an AI-powered assistant that can call people on demand: if the assistant is unsure whom to call, asking for clarification (e.g., “Do you mean Bill G. or Bill C.?”) can be far less costly than calling the wrong person. To support this, build the AI system so that it can estimate its own uncertainty and use that estimate to scope or gracefully degrade its services when in doubt.</p>
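<p>The guideline describes a design pattern rather than a specific API, but the sketch below illustrates one way the “estimate uncertainty, then scope” logic could look for the calling-assistant example. Everything here (the Contact type, resolve_call_target, and the two thresholds) is a hypothetical assumption for illustration, not part of the HAX Toolkit.</p>

<pre><code>
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    score: float  # the assistant's confidence that this contact matches the request


# Illustrative values only; real thresholds would be tuned per product.
CONFIDENCE_THRESHOLD = 0.80
AMBIGUITY_MARGIN = 0.15


def resolve_call_target(candidates: list[Contact]) -> str:
    """Decide whether to call, ask a clarifying question, or degrade gracefully."""
    if not candidates:
        # No match at all: scope the service down rather than guess.
        return "Sorry, I couldn't find anyone by that name."

    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    top = ranked[0]

    # Low absolute confidence: gracefully degrade instead of acting on a guess.
    if top.score < CONFIDENCE_THRESHOLD:
        return "I'm not sure who you meant. Could you say the full name?"

    # Two candidates are nearly tied: engage in disambiguation.
    if len(ranked) > 1 and top.score - ranked[1].score < AMBIGUITY_MARGIN:
        return f"Do you mean {top.name} or {ranked[1].name}?"

    # Confident enough to act.
    return f"Calling {top.name}..."


# Two similarly scored contacts trigger a clarifying question instead of a call.
print(resolve_call_target([Contact("Bill G.", 0.85), Contact("Bill C.", 0.78)]))
</code></pre>

<p>The point of the sketch is that the assistant’s uncertainty drives its behavior: below a confidence floor it degrades to a clarifying prompt, and when two candidates are nearly tied it disambiguates rather than risk calling the wrong person.</p>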

<p>Use Guideline 10 design patterns to scope services when the AI system is uncertain:</p>