Minimax Regret for Stochastic Shortest Path
- Alon Cohen
- Yonathan Efroni
- Yishay Mansour
- Aviv Rosenberg
2021 Neural Information Processing Systems
We study the Stochastic Shortest Path (SSP) problem, in which an agent has to reach a goal state with minimum total expected cost. In the learning formulation of the problem, the agent has no prior knowledge of the costs and dynamics of the model. She repeatedly interacts with the model for $K$ episodes and must learn to approximate the optimal policy as closely as possible. In this work we show that the minimax regret for this setting is $\widetilde O(B_\star \sqrt{|S| |A| K})$, where $B_\star$ is a bound on the expected cost of the optimal policy from any state, $S$ is the state space, and $A$ is the action space. This matches the lower bound of Rosenberg et al. (2020) up to logarithmic factors and improves their regret bound by a factor of $\sqrt{|S|}$. Our algorithm runs in polynomial time per episode and is based on a novel reduction to reinforcement learning in finite-horizon MDPs. To that end, we provide an algorithm for the finite-horizon setting whose leading regret term depends only logarithmically on the horizon, yielding the same regret guarantees for SSP.
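To make the claimed improvement concrete (an illustration implied by the statements above, not spelled out in the abstract): the stated bound together with the $\sqrt{|S|}$ gap implies an earlier bound of order $B_\star |S| \sqrt{|A| K}$ for Rosenberg et al. (2020), since

$$
\frac{B_\star |S| \sqrt{|A| K}}{B_\star \sqrt{|S| |A| K}} = \sqrt{|S|},
$$

and the new bound $\widetilde O(B_\star \sqrt{|S| |A| K})$ coincides with their lower bound up to logarithmic factors, which is what makes the result minimax optimal.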