Anyprefer: An Agentic Framework for Preference Data Synthesis

  • Yiyang Zhou ,
  • Zhaoyang Wang ,
  • Tianle Wang ,
  • Shangyu Xing ,
  • Peng Xia ,
  • Bo Li ,
  • Kaiyuan Zheng ,
  • Zijian Zhang ,
  • Zhaorun Chen ,
  • Wenhao Zheng ,
  • Weitong Zhang ,
  • Ying Wei ,
  • Mohit Bansal ,
  • Huaxiu Yao

ICLR 2025


High-quality preference data is essential for aligning foundation models with human values through preference learning. However, manual annotation of such data is time-consuming and costly. Recent methods often adopt a self-rewarding approach, where the target model generates and annotates its own preference data, but this can lead to inaccuracies because the reward model shares weights with the target model, amplifying its inherent biases. To address these issues, we propose Anyprefer, a framework designed to synthesize high-quality preference data for aligning the target model. Anyprefer frames the data synthesis process as a cooperative two-player Markov Game in which the target model and the judge model collaborate. A set of external tools is introduced to help the judge model accurately reward the target model's responses, mitigating biases in the rewarding process. In addition, a feedback mechanism optimizes the prompts of both models, strengthening their collaboration and improving data quality. The synthesized data is compiled into a new preference dataset, Anyprefer-V1, consisting of 58K high-quality preference pairs. Extensive experiments show that Anyprefer significantly improves model alignment performance across four main applications covering 21 datasets, achieving average improvements of 18.55% on five natural language generation datasets, 3.66% on nine vision-language understanding datasets, 30.05% on three medical image analysis datasets, and 16.00% on four visuo-motor control tasks.
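To make the described pipeline concrete, below is a minimal, hypothetical Python sketch of a tool-assisted synthesis loop in the spirit of the abstract: a target model proposes candidate responses, a judge model scores them with tool assistance, a feedback step revises both players' prompts, and the best and worst candidates form a preference pair. All names (`generate_responses`, `judge_with_tools`, `revise_prompts`, the toy scoring rule) are illustrative placeholders, not the paper's actual API or algorithm.

```python
from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str


def generate_responses(target_prompt: str, n: int = 4) -> list[str]:
    # Stand-in for the target model sampling n candidate responses.
    return [f"candidate {i} for: {target_prompt}" for i in range(n)]


def judge_with_tools(judge_prompt: str, prompt: str, responses: list[str]) -> list[float]:
    # Stand-in for the judge model scoring each response, aided by external
    # tools (e.g., retrievers or verifiers) to reduce reward bias.
    # Toy scoring rule for illustration only.
    return [float(len(r)) for r in responses]


def revise_prompts(target_prompt: str, judge_prompt: str, scores: list[float]) -> tuple[str, str]:
    # Stand-in for the feedback mechanism that rewrites both players' prompts
    # when the reward signal separates candidates poorly.
    if max(scores) - min(scores) < 1.0:
        return target_prompt + " (be more specific)", judge_prompt + " (cite tool evidence)"
    return target_prompt, judge_prompt


def synthesize(seed_prompts: list[str], rounds: int = 2) -> list[PreferencePair]:
    dataset: list[PreferencePair] = []
    judge_prompt = "Score each response for accuracy and helpfulness."
    for prompt in seed_prompts:
        target_prompt = prompt
        for _ in range(rounds):  # cooperative two-player interaction
            responses = generate_responses(target_prompt)
            scores = judge_with_tools(judge_prompt, prompt, responses)
            target_prompt, judge_prompt = revise_prompts(target_prompt, judge_prompt, scores)
        best = responses[scores.index(max(scores))]
        worst = responses[scores.index(min(scores))]
        dataset.append(PreferencePair(prompt=prompt, chosen=best, rejected=worst))
    return dataset


if __name__ == "__main__":
    pairs = synthesize(["Explain why the sky is blue."])
    print(pairs[0].chosen, "|", pairs[0].rejected)
```

In a real instantiation, the stubs would be replaced by calls to the target and judge models, and the resulting pairs would be aggregated into a dataset such as Anyprefer-V1 for preference learning.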