Research talk: Breaking the deadly triad with a target network
The deadly triad refers to the instability of a reinforcement learning (RL) algorithm that simultaneously employs off-policy learning, function approximation, and bootstrapping; it is a major challenge in off-policy RL. Join PhD student Shangtong Zhang, from the WhiRL group at the University of Oxford, to learn how the target network can be used as a tool for theoretically breaking the deadly triad. Together, you'll explore a theoretical account of the conventional wisdom that a target network stabilizes training; a novel target network update rule that augments the commonly used Polyak-averaging style update with two projections; and how a target network can be used in linear off-policy RL algorithms, in both prediction and control settings and in both discounted and average-reward Markov decision processes.
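For readers unfamiliar with the Polyak-averaging style update the abstract mentions, here is a minimal NumPy sketch. The standard update is theta_target <- (1 - tau) * theta_target + tau * theta_online; the talk's rule augments this with two projections whose exact form is defined in the accompanying paper and is not reproduced here. The `project_to_ball` helper, the radius, the value of `tau`, and the random-walk stand-in for a TD gradient step are all illustrative assumptions, not the speaker's method.

```python
import numpy as np

def polyak_update(theta_target, theta_online, tau=0.05):
    """Polyak-averaging target network update:
    theta_target <- (1 - tau) * theta_target + tau * theta_online."""
    return (1.0 - tau) * theta_target + tau * theta_online

def project_to_ball(theta, radius):
    """Project a parameter vector onto the L2 ball of the given radius.
    Illustrative placeholder: the talk's update uses two projections,
    whose precise definitions appear in the paper, not this single one."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

# Toy usage: the online parameters drift (standing in for TD updates),
# while the target parameters track them slowly and stay bounded.
rng = np.random.default_rng(0)
theta_online = rng.normal(size=8)
theta_target = np.zeros(8)
for _ in range(100):
    theta_online += 0.1 * rng.normal(size=8)  # stand-in for a TD gradient step
    theta_target = project_to_ball(
        polyak_update(theta_target, theta_online, tau=0.05), radius=10.0)
print(theta_target)
```

The slow-moving, bounded target network is what makes the bootstrapping target quasi-stationary, which is the intuition the talk formalizes.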
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
- Track: Reinforcement Learning
- Date:
- Speakers: Shangtong Zhang
- Affiliation: Oxford University
Sessions in the Reinforcement Learning track:
- Research talk: Reinforcement learning with preference feedback
  Speakers: Aadirupa Saha
- Panel: Generalization in reinforcement learning
  Speakers: Mingfei Sun, Roberta Raileanu, Harm van Seijen
- Research talk: Successor feature sets: Generalizing successor representations across policies
  Speakers: Kiante Brantley
- Research talk: Towards efficient generalization in continual RL using episodic memory
  Speakers: Mandana Samiei
- Research talk: Breaking the deadly triad with a target network
  Speakers: Shangtong Zhang
- Panel: The future of reinforcement learning
  Speakers: Geoff Gordon, Emma Brunskill, Craig Boutilier