Importance of Directional Feedback for LLM-based Optimizers
- Allen Nie
- Ching-An Cheng
- Andrey Kolobov
- Adith Swaminathan
NeurIPS 2023 Foundation Models for Decision Making Workshop
We study the potential of using large language models (LLMs) as interactive optimizers for solving maximization problems over a text space using natural language and numerical feedback. Inspired by the classical optimization literature, we classify natural language feedback into directional and non-directional feedback, where the former generalizes first-order feedback to the natural language space. We find that LLMs are especially capable of optimization when provided with directional feedback. Based on this insight, we design a new LLM-based optimizer that synthesizes directional feedback from the historical optimization trace to achieve reliable improvement across iterations. Empirically, we show that our LLM-based optimizer is more stable and efficient than existing techniques on optimization problems ranging from maximizing mathematical functions to optimizing prompts for writing poems.
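To make the idea concrete, below is a minimal sketch (not the authors' implementation) of an LLM-driven optimization loop that first asks the model to synthesize directional feedback from the historical trace of candidates and scores, then proposes a new candidate conditioned on that feedback. The helpers `query_llm`, `llm_optimize`, and `score_fn` are hypothetical names introduced only for illustration.

```python
from typing import Callable, List, Tuple


def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API."""
    raise NotImplementedError


def llm_optimize(
    score_fn: Callable[[str], float],  # black-box numerical objective on text candidates
    initial_candidate: str,
    num_iters: int = 10,
) -> str:
    """Illustrative loop: synthesize directional feedback from the trace, then propose."""
    history: List[Tuple[str, float]] = [(initial_candidate, score_fn(initial_candidate))]
    for _ in range(num_iters):
        trace = "\n".join(f"candidate: {c}\nscore: {s:.3f}" for c, s in history)
        # Turn the raw trace into directional feedback: a natural-language
        # statement of how the candidate should change to increase the score.
        feedback = query_llm(
            "Optimization history:\n"
            f"{trace}\n"
            "Describe how the candidate should change to increase the score."
        )
        best = max(history, key=lambda x: x[1])[0]
        # Propose a new candidate conditioned on the synthesized directional feedback.
        new_candidate = query_llm(
            f"Current best candidate:\n{best}\n"
            f"Directional feedback:\n{feedback}\n"
            "Propose an improved candidate."
        )
        history.append((new_candidate, score_fn(new_candidate)))
    return max(history, key=lambda x: x[1])[0]
```

In this sketch, the same pattern applies whether the candidate is a point in a mathematical domain encoded as text or a prompt being tuned for a downstream task such as poem writing; only `score_fn` changes.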