Pattern 10C: Fall back to other strategies

Problem

The AI’s uncertainty level is so high that it is unable to take an action.

Solution

Enable the AI system to fall back to other strategies so the interaction with the user can proceed.

Use when

  • Other strategies for accomplishing the same user goals are available.
  • Strategies for fallback are available for situations such as:
    • Safety-critical applications
    • Low-latency applications
    • Legal agreements that require system uptime

How

Collaborate with an AI/ML practitioner to:

  • Get information about the system’s performance and confidence levels.
  • Determine the performance or confidence threshold below which system failure becomes more likely or more costly for the user.
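
As a concrete illustration, a classifier’s confidence can often be read from its output probabilities. The snippet below is a minimal Python sketch with hypothetical labels and scores; whether such a score is calibrated well enough to drive a fallback decision is exactly what to confirm with the AI/ML practitioner.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_with_confidence(logits, labels):
    """Return the top label and a confidence score in [0, 1].

    The maximum probability is only a rough proxy for confidence;
    verify with the AI/ML practitioner whether it is reliable enough
    to trigger a fallback.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Hypothetical usage: raw scores from some model for three intents.
label, confidence = prediction_with_confidence(
    logits=[2.1, 0.3, -1.0],
    labels=["book_flight", "check_status", "unknown"],
)
```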

When the system hits the determined threshold, fall back to another automated solution or to a human (human-in-the-loop).
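
The following Python sketch shows one way such threshold-based routing could work. The threshold value, the model and queue interfaces, and their method names are all assumptions for illustration, not a prescribed implementation.

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed value; set it together with the AI/ML practitioner

def answer_query(query, primary_model, fallback_model, human_queue):
    """Route a query to the primary model, an automated fallback,
    or a human, based on confidence (hypothetical interfaces)."""
    result, confidence = primary_model.predict(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return result

    # Below the threshold: try a simpler or previous automated solution first.
    result, confidence = fallback_model.predict(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return result

    # Still not confident enough: hand off to a human (human-in-the-loop).
    return human_queue.escalate(query)
```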

Automated solutions may include a previous technology or simpler versions of the model itself.

Falling back to automated solutions may not be noticeable to the user if the difference between the default and the fallback mechanisms is small.

If the fallback is noticeable, consider whether to inform the user about the fallback method or to enable the user to make an explicit switch to it. For example, a user might choose to trade the accuracy of a newer method for the familiarity of a previous one. Informing the user or providing the optional switch might be particularly appropriate for early or beta versions, high-stakes scenarios, power users, and so on.
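
A minimal sketch of what honouring such preferences could look like, assuming hypothetical per-user settings and callback names:

```python
from dataclasses import dataclass

@dataclass
class FallbackPreferences:
    """Hypothetical per-user settings controlling fallback behaviour."""
    prefer_previous_method: bool = False       # explicit switch to the fallback method
    notify_on_automatic_fallback: bool = True  # tell the user when a fallback happens

def run_with_preferences(query, prefs, new_method, previous_method, notify_user):
    """Honour an explicit user switch; otherwise fall back automatically and,
    if the user opted in, tell them which method produced the result."""
    if prefs.prefer_previous_method:
        # The user explicitly traded the newer method's accuracy for familiarity.
        return previous_method(query)

    result, confident = new_method(query)
    if confident:
        return result

    if prefs.notify_on_automatic_fallback:
        notify_user("Showing results from the previous method because "
                    "the newer one was not confident enough.")
    return previous_method(query)
```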

Human-in-the-loop solutions include handing off system control to either the user directly or to user support.

Inform the user of the hand-off, allowing enough time for them to consent or react.
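
One way to build in that notice period, sketched in Python with hypothetical session and callback interfaces and an assumed grace period:

```python
import time

HANDOFF_NOTICE_SECONDS = 10  # assumed grace period; tune per product and risk level

def hand_off_to_human(session, notify_user, user_acknowledged, transfer_to_support):
    """Announce the hand-off, then wait for the user to consent or react
    before transferring control (hypothetical callbacks)."""
    notify_user(session, "The assistant is not confident enough to continue. "
                         "You will be connected to a support agent.")

    deadline = time.monotonic() + HANDOFF_NOTICE_SECONDS
    while time.monotonic() < deadline:
        if user_acknowledged(session):
            break  # the user saw the notice and consented (or took over themselves)
        time.sleep(0.5)

    # Transfer control only after the notice period, so the hand-off is
    # never silent or abrupt.
    transfer_to_support(session)
```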

User benefits

  • Enables users to continue using the system and accomplish their goals in the case of AI failures.
  • May make the system usable in a broader range of operating environments (e.g., offline).
  • May increase safety.

Common pitfalls

  • The fallback does not mitigate the initial issue, either because it happens too late or because it is ineffective.
  • Employing fallback strategies too frequently disrupts the interaction with the AI system when the fallback is noticeable.
  • When handing off to a human, hand-off signals are unclear, unnoticeable, or delivered too late.
  • It is not clear to the user how to take over from the system.
  • It is not clear to the user what action to take after taking over from the system.

References

Examples