Decoding multitask DQN in the world of Minecraft

The 13th European Workshop on Reinforcement Learning (EWRL) 2016

Also presented at the 11th Women in Machine Learning Workshop and the Deep Reinforcement Learning Workshop at NeurIPS 2016.

Multitask networks that can play multiple Atari games at expert level have been successfully trained using supervised learning from several single-task Deep Q Networks (DQN). However, such networks are not able to exploit the high-level similarity between games or learn common representations of game states. In fact, the learned representations were shown to be separable by game. In our work, we show that with sufficient similarity between tasks, we can train a multitask extension of DQN (MDQN) which shares representations across tasks without loss of performance. To this end, we construct a novel set of tasks with shared characteristics in Minecraft, a complex 3D world, and are able to demonstrate meaningful representation sharing between different related tasks. Sharing representations for similar tasks is of paramount importance for transfer learning and lifelong learning. We envision the results of this work as a stepping stone to novel lifelong learning approaches.
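One common way to realize the kind of representation sharing the abstract describes is a network with a shared trunk and a separate Q-value head per task. The sketch below is illustrative only, not the paper's actual architecture: all layer sizes, names, and the tiny numpy forward pass are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: state features, shared hidden units, actions, tasks.
STATE_DIM, HIDDEN, N_ACTIONS, N_TASKS = 8, 16, 4, 3

# Shared trunk: one weight matrix reused by every task, so all tasks
# read the same learned state representation.
W_shared = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))

# Task-specific heads: one output layer of Q-values per task.
W_heads = [rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))
           for _ in range(N_TASKS)]

def q_values(state, task_id):
    """Q-values for one task: shared representation, task-specific head."""
    h = np.tanh(state @ W_shared)   # representation shared across tasks
    return h @ W_heads[task_id]     # per-task action values

state = rng.normal(size=STATE_DIM)
qs = [q_values(state, t) for t in range(N_TASKS)]
```

During training, gradients from every task would flow into `W_shared`, which is what forces a common representation; only the heads specialize per task.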