{"id":1162537,"date":"2026-03-03T10:05:09","date_gmt":"2026-03-03T18:05:09","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-video&p=1162537"},"modified":"2026-03-03T10:05:10","modified_gmt":"2026-03-03T18:05:10","slug":"magentic-marketplace-testing-societies-of-agents-at-scale","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/magentic-marketplace-testing-societies-of-agents-at-scale\/","title":{"rendered":"Magentic Marketplace: Testing societies of agents at scale"},"content":{"rendered":"\n
\n

<p>As AI agents move from isolated tools to active participants in multi-agent ecosystems, their success depends on more than task competence\u2014it requires strategic behavior under misaligned incentives and imperfect information. Using Magentic Marketplace, an open-source simulation of two-sided agent markets, we show that while frontier models can achieve strong welfare outcomes in ideal settings, performance degrades at scale and reveals emergent failure modes such as manipulation and speed bias, motivating a shift toward training agents for social reasoning.<\/p>\n\n\n\n

<h2>Explore more<\/h2>\n\n\n\n