Multi-market Energy Optimization with Renewables via Reinforcement Learning

This paper introduces a deep reinforcement learning (RL) framework for operating power plants that pair renewable generation with energy storage. The objective is to maximize revenue across energy markets while minimizing storage degradation costs and renewable curtailment. The framework handles key complexities of the problem: the time coupling introduced by storage devices, uncertainty in renewable generation and energy prices, and non-linear storage dynamics. The problem is formulated as a hierarchical Markov Decision Process (MDP), with component-level simulators modeling the storage devices. Because RL requires only the ability to simulate a model, it can incorporate complex storage models that are out of reach for optimization-based methods, which typically require convex and differentiable component models. A central element of the approach is ensuring that policy actions respect system constraints, achieved via a novel method of projecting potentially infeasible actions onto a safe state-action set. Extensive experiments on data from US and Indian electricity markets compare the learned RL policies against a baseline control policy and a retrospective optimal control policy. The results validate the framework's adaptability to different storage models and demonstrate the effectiveness of RL in a complex energy optimization setting involving multi-market bidding, probabilistic forecasts, and accurate storage component models.
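To make the safe-projection idea concrete, the sketch below illustrates the general flavor of mapping a possibly infeasible policy action onto a feasible set, here for a single battery with a power rating and state-of-charge bounds. This is a minimal illustrative assumption, not the paper's actual formulation: the function name, the simple round-trip-efficiency model, and all parameters (`p_max`, `soc_min`, `soc_max`, `capacity_mwh`, `eta_c`, `eta_d`) are hypothetical stand-ins for the richer component models and state-action set used in the paper.

```python
import numpy as np

def project_action(p_charge, soc, dt_h, p_max, soc_min, soc_max,
                   capacity_mwh, eta_c=0.95, eta_d=0.95):
    """Project a proposed storage power setpoint onto a feasible set.

    p_charge : proposed power in MW (positive = charge, negative = discharge)
    soc      : current state of charge, as a fraction of capacity in [0, 1]
    dt_h     : length of the decision interval in hours

    Illustrative sketch only: assumes a simple battery with constant
    charge/discharge efficiencies, not the paper's component models.
    """
    # Respect the power rating first.
    p = np.clip(p_charge, -p_max, p_max)

    # Largest charge power that keeps soc <= soc_max after one step
    # (charging at p MW for dt_h hours stores eta_c * p * dt_h MWh).
    headroom_mwh = (soc_max - soc) * capacity_mwh
    p_charge_max = headroom_mwh / (eta_c * dt_h)

    # Largest discharge power that keeps soc >= soc_min after one step
    # (delivering p MW for dt_h hours drains p * dt_h / eta_d MWh).
    available_mwh = (soc - soc_min) * capacity_mwh
    p_discharge_max = available_mwh * eta_d / dt_h

    return float(np.clip(p, -p_discharge_max, p_charge_max))

# Example: the policy proposes 60 MW of charging, but with the battery
# near full only ~10.5 MW fits below soc_max, so the action is projected.
safe_p = project_action(p_charge=60.0, soc=0.9, dt_h=1.0, p_max=50.0,
                        soc_min=0.1, soc_max=0.95, capacity_mwh=200.0)
```

A clipping rule like this is the simplest instance of such a projection; with multiple coupled components, a projection would instead solve a small feasibility problem over the joint state-action set.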


https://arxiv.org/abs/2306.08147