Provably Efficient Lifelong Reinforcement Learning with Linear Function Representation
- Sanae Amani,
- Lin Yang,
- Ching-An Cheng
ICLR 2023
We theoretically study lifelong reinforcement learning (RL) with linear representation in a regret minimization setting. The agent's goal is to learn a multi-task policy based on a linear representation while solving a sequence of tasks that may be adaptively chosen in response to its past behavior. We frame the problem as a linearly parameterized contextual Markov decision process (MDP), where each task is specified by a context and the transition dynamics are context-independent, and we introduce a new completeness-style assumption on the representation that is sufficient to ensure the optimal multi-task policy is realizable under the linear representation. Under this assumption, we propose an algorithm, called UCB Lifelong Value Distillation (UCBlvd), that provably achieves sublinear regret for any sequence of tasks while making only a sublinear number of planning calls. Specifically, for \(K\) task episodes of horizon \(H\), our algorithm achieves a regret bound of \(\tilde{O}(\sqrt{(d^3+d'd)H^4K})\) using only \(O(dH\log(K))\) planning calls, where \(d\) and \(d'\) are the feature dimensions of the dynamics and rewards, respectively. This theoretical guarantee implies that our algorithm enables a lifelong learning agent to internalize experiences into a multi-task policy and rapidly solve new tasks.
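As a rough illustration of the framing (a sketch based only on the abstract; the symbols \(\phi\), \(\psi\), \(\mu\), and \(\theta_w\) are illustrative notation, not fixed by the paper), the context-independent transition dynamics and the context-dependent rewards for a task context \(w\) can be pictured as linearly parameterized:

\[
P(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle, \qquad \phi(s, a) \in \mathbb{R}^{d},
\]
\[
r_w(s, a) = \langle \psi(s, a), \theta_w \rangle, \qquad \psi(s, a) \in \mathbb{R}^{d'}.
\]

Under this reading, the regret bound \(\tilde{O}(\sqrt{(d^3+d'd)H^4K})\) depends on both feature dimensions, while the \(O(dH\log(K))\) planning calls grow only logarithmically in the number of task episodes \(K\).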