Addressing Signal Delay in Deep Reinforcement Learning
Despite notable advances in deep reinforcement learning (DRL) in recent years, a prevalent and often overlooked issue is the impact of signal delay. Signal delay occurs when there is a lag between an agent's perception of the environment and its corresponding actions. In this paper, we first formalize delayed-observation Markov decision processes (DOMDPs) by extending the standard MDP framework to incorporate signal delays. Next, we elucidate the challenges posed by signal delay in DRL, showing that naïvely applied DRL algorithms and generic methods for partially observable tasks suffer greatly from delays. Lastly, we propose effective strategies to overcome these challenges. Our methods achieve remarkable performance in continuous robotic control tasks with large delays, yielding results comparable to those in non-delayed cases. Overall, our work contributes to a deeper understanding of DRL in the presence of signal delays and introduces novel approaches to address the associated challenges.
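To make the setting concrete, the sketch below shows one way a fixed observation delay of d steps can be simulated by wrapping a standard Gymnasium environment, so that the agent's action at time t is chosen from the observation at time t - d. This is an illustrative assumption about the setup, not the paper's implementation; the wrapper name, the `delay` parameter, and the use of Gymnasium are ours.

```python
# Illustrative sketch (not the paper's code): delay observations by a fixed
# number of environment steps, so the agent perceives s_{t-d} at step t.
from collections import deque

import gymnasium as gym


class DelayedObservationWrapper(gym.Wrapper):
    """Return observations delayed by `delay` environment steps."""

    def __init__(self, env, delay=3):
        super().__init__(env)
        self.delay = delay
        self._buffer = deque()

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        # Pad the buffer so the first `delay` steps re-observe the initial state.
        self._buffer = deque([obs] * (self.delay + 1), maxlen=self.delay + 1)
        return self._buffer[0], info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._buffer.append(obs)  # newest observation enters the buffer
        # The oldest buffered observation is what the agent actually sees.
        return self._buffer[0], reward, terminated, truncated, info


# Example usage (environment name chosen arbitrarily for illustration):
# env = DelayedObservationWrapper(gym.make("HalfCheetah-v4"), delay=5)
```

Under such a wrapper, the transitions an agent observes are no longer Markovian in the delayed observation, which is the core difficulty the paper's DOMDP formulation and proposed strategies address.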
Publication Downloads
Algorithms to Handle Signal Delay in Deep Reinforcement Learning
January 31, 2024
This code release accompanies the paper and provides the algorithms for handling signal delay, the lag between an agent's perception of the environment and its corresponding actions, in continuous robotic control. In simulated continuous robotic control tasks with large delays, these methods achieve performance comparable to the non-delayed setting.