{"id":794090,"date":"2021-11-16T08:00:40","date_gmt":"2021-11-16T16:00:40","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=794090"},"modified":"2021-11-09T15:27:11","modified_gmt":"2021-11-09T23:27:11","slug":"research-talk-successor-feature-sets-generalizing-successor-representations-across-policies","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/research-talk-successor-feature-sets-generalizing-successor-representations-across-policies\/","title":{"rendered":"Research talk: Successor feature sets: Generalizing successor representations across policies"},"content":{"rendered":"
Successor-style representations have many advantages for reinforcement learning. For example, they can help an agent generalize from experience to new goals. However, successor-style representations are not optimized to generalize across policies\u2014typically, a limited-length list of policies is maintained, and information is shared among them via representation learning or generalized policy iteration. Join University of Maryland PhD candidate Kiant\u00e9 Brantley to address these limitations in successor-style representations. With collaborators from Microsoft Research Montr\u00e9al, he developed a new general successor-style representation that brings together ideas from predictive state representations, belief space value iteration, and convex analysis. The new representation is highly expressive. For example, it allows for efficiently reading off an optimal policy for a new reward function or a policy that imitates a demonstration. Together, you\u2019ll explore the basics of successor-style representations, the challenges of current approaches, and the results of the proposed approach on small, known environments.<\/p>\n