AI Frontiers blog
By Adam Fourney, Principal Researcher; Gagan Bansal, Senior Researcher; Hussein Mozannar, Senior Researcher; Victor Dibia, Principal Research Software Engineer; Saleema Amershi, Partner Research Manager. Contributors: Adam Fourney, Gagan Bansal, Hussein Mozannar, Cheng Tan, Eduardo Salinas, Erkang (Eric) Zhu, Friederike Niedtner,…
By Yadong Lu, Senior Researcher; Jianwei Yang, Principal Researcher; Yelong Shen, Principal Research Manager; Ahmed Awadallah, Partner Research Manager. Recent advancements in large vision-language models (VLMs), such as GPT-4V and GPT-4o, have demonstrated considerable promise in driving intelligent agent systems…
Direct Nash Optimization: Teaching language models to self-improve with general preferences
This talk discusses teaching language models to self-improve using a preference oracle like GPT-4, framing it as a two-player game to find an optimal policy at a Nash equilibrium, and achieving state-of-the-art win rates against GPT-4 Turbo on benchmarks such…
Adam Fourney discusses the effectiveness of using multiple agents working together to complete complex multi-step tasks. He showcases their ability to outperform previous single-agent solutions on benchmarks like GAIA, using customizable arrangements of agents that collaborate, reason, and utilize…
Besmira Nushi summarizes timely challenges and ongoing work on evaluating and developing an in-depth understanding of large foundation models, as well as agent platforms built on such models, at the Microsoft Research Forum.
Dipendra Misra, Senior Researcher at Microsoft Research New York City, delivers an AI Frontiers lightning talk at the Microsoft Research Forum.
Panel Discussion: AI Frontiers
Hosted by Ashley Llorens, VP and Distinguished Scientist, Microsoft. AI researchers Sébastien Bubeck, Ahmed Awadallah, and Ece Kamar discuss frontiers in small language models and where AI research and capabilities are headed next.