{"id":794069,"date":"2021-11-16T08:00:14","date_gmt":"2021-11-16T16:00:14","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=794069"},"modified":"2021-11-09T15:03:31","modified_gmt":"2021-11-09T23:03:31","slug":"panel-generalization-in-reinforcement-learning","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/panel-generalization-in-reinforcement-learning\/","title":{"rendered":"Panel: Generalization in reinforcement learning"},"content":{"rendered":"
The ability of a reinforcement learning (RL) policy to generalize is a key requirement for the broad application of RL algorithms. This generalization ability is also essential to the future of RL\u2014both in theory and in practice. Join Microsoft researchers Harm van Seijen, Cheng Zhang, and Mingfei Sun, along with Dr. Wendelin Boehmer from Delft University of Technology and Dr. Roberta Raileanu from New York University, as they examine how agents struggle to transfer learned policies to new environments or tasks and explore why generalization remains challenging for state-of-the-art deep RL algorithms. They will also discuss open questions about the right way to think about generalization in RL, the right way to formalize the problem, and the most important tasks to consider for generalization. Together, you will explore why studying generalization in RL matters, recent research progress, open challenges, and potential research directions in this area.<\/p>\n