Diffusion for World Modeling: Visual Details Matter in Atari

  • Eloi Alonso,
  • Adam Jelley,
  • Vincent Micheli,
  • Anssi Kanervisto,
  • Amos Storkey,
  • Tim Pearce,
  • François Fleuret

NeurIPS 2024


World models constitute a promising approach for training reinforcement learning agents in a safe and sample-efficient manner. Recent world models predominantly operate on sequences of discrete latent variables to model environment dynamics. However, this compression into a compact discrete representation may ignore visual details that are important for reinforcement learning. Concurrently, diffusion models have become a dominant approach for image generation, challenging well-established methods that model discrete latents. Motivated by this paradigm shift, we introduce DIAMOND (DIffusion As a Model Of eNvironment Dreams), a reinforcement learning agent trained in a diffusion world model. We analyze the key design choices that are required to make diffusion suitable for world modeling, and demonstrate how improved visual details can lead to improved agent performance. DIAMOND achieves a mean human-normalized score of 1.46 on the competitive Atari 100k benchmark, a new best for agents trained entirely within a world model. To foster future research on diffusion for world modeling, we release our code, agents and playable world models at https://github.com/eloialonso/diamond.
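To make the core idea concrete, here is a minimal, hypothetical sketch of one training step for a diffusion world model: the model learns to denoise the next frame, conditioned on past frames and the agent's action. This is not DIAMOND's actual implementation; the noise schedule, the `toy_denoiser` placeholder, and the unweighted loss are illustrative assumptions standing in for a trained neural network and a proper diffusion objective.

```python
# Hypothetical sketch of a diffusion world-model training step (illustration
# only, not DIAMOND's code): learn to denoise the next frame conditioned on
# past frames and the action taken.
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(noisy_next, past_frames, action, sigma):
    # Placeholder for a neural denoiser D_theta(x_noisy, context, sigma).
    # Here we just shrink the noisy input; a real model would use the
    # conditioning (past_frames, action) to predict the clean frame.
    return noisy_next / (1.0 + sigma)

def diffusion_training_loss(next_frame, past_frames, action, rng):
    # Sample a noise level from a log-normal schedule and corrupt the target.
    sigma = float(np.exp(rng.normal(-1.2, 1.2)))
    noise = rng.normal(size=next_frame.shape)
    noisy_next = next_frame + sigma * noise
    # The denoiser tries to recover the clean next frame.
    pred = toy_denoiser(noisy_next, past_frames, action, sigma)
    # Simple (unweighted) denoising loss; practical objectives weight by sigma.
    return float(np.mean((pred - next_frame) ** 2))

# Tiny fake trajectory: two past 8x8 grayscale frames, one action, one target.
past = rng.normal(size=(2, 8, 8))
target = rng.normal(size=(8, 8))
loss = diffusion_training_loss(target, past, action=3, rng=rng)
print(np.isfinite(loss) and loss >= 0.0)
```

At generation time, the same denoiser would be applied iteratively, starting from pure noise, to "dream" the next frame; the agent is then trained entirely inside these imagined rollouts.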