Analysis of Multiagent Teams using Distributed POMDPs

Many current large-scale multiagent team implementations can be characterized as following the “belief-desire-intention” (BDI) paradigm, with explicit representation of team plans. Despite their promise, current BDI team approaches lack tools for quantitative performance analyses under uncertainty. Distributed partially observable Markov decision processes (POMDPs) are well suited for such analyses, but finding optimal distributed POMDP policies is highly intractable. The key contribution of this article is a hybrid BDI-POMDP approach, which exploits the positive interactions of these two approaches. In particular, BDI team plans are exploited to improve POMDP tractability and POMDP analysis improves BDI team plan performance.

Concretely, the structure within the BDI team plans can be exploited to build a factored distributed POMDP model of the domain. The distributed POMDP model can then be used to optimize key decisions within the team plan, such as role allocation, in the presence of uncertainty. Here again, the structure within the team plan can be exploited to improve the tractability of the optimization step. Further, the belief-based decision making in BDI plans yields a more efficient representation of policies and a significantly faster policy evaluation algorithm suited to our BDI-POMDP hybrid approach. As my research highlights, a hybrid BDI-POMDP approach makes it possible to analyze multiagent teams in the presence of uncertainty. Exploiting the positive interactions between the two approaches helps reduce the inherent intractability of distributed POMDPs, enabling their use in complex dynamic domains. I will demonstrate the benefits of this hybrid approach in two key domains. One of these is RoboCupRescue, where I will illustrate significant practical improvements in allocating teams in disaster rescue simulations.
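The belief-based policies mentioned above rest on the standard POMDP belief update: an agent maintains a probability distribution over hidden states and revises it after each action and observation. The sketch below illustrates that update on a hypothetical two-state toy model (the transition and observation probabilities are invented for illustration, not taken from the talk's domains); it is a minimal sketch of the general Bayes-filter step, not the speaker's algorithm.

```python
import numpy as np

# Hypothetical toy POMDP: 2 states, 2 actions, 2 observations.
# T[a][s, s'] = P(s' | s, a); O[a][s', o] = P(o | s', a). Values are illustrative.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.1, 0.9]]])

def belief_update(b, a, o):
    """Bayes-filter step: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_pred = b @ T[a]              # predict: marginalize over the current state
    b_new = b_pred * O[a][:, o]    # correct: weight by the observation likelihood
    return b_new / b_new.sum()     # normalize to a probability distribution

b0 = np.array([0.5, 0.5])          # uniform initial belief
b1 = belief_update(b0, a=0, o=1)   # belief after taking action 0, observing 1
```

A policy in this representation maps beliefs (rather than full observation histories) to actions, which is what makes policy evaluation compact in the hybrid approach.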

Speaker Details

Ranjit Nair is a PhD candidate at the University of Southern California, working in the TEAMCORE research group under Milind Tambe. His interests include multiagent systems, decision theory, and reasoning under uncertainty. During his PhD he has worked on role allocation algorithms for multiagent teams, search and rescue in urban disasters, analysis of multiagent teams, and algorithms for distributed POMDPs.

Date:
Speakers:
Ranjit Nair
Affiliation:
University of Southern California