{"id":1155383,"date":"2025-12-04T04:12:09","date_gmt":"2025-12-04T12:12:09","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=1155383"},"modified":"2025-12-18T01:47:51","modified_gmt":"2025-12-18T09:47:51","slug":"quantifying-and-mitigating-emerging-risks-in-multi-agent-collaboration","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/quantifying-and-mitigating-emerging-risks-in-multi-agent-collaboration\/","title":{"rendered":"Quantifying and Mitigating Emerging Risks in Multi-Agent Collaboration"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"background\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Quantifying and Mitigating Emerging Risks in Multi-Agent Collaboration<\/h1>\n\n\n\n

<\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n

This project investigates critical safety challenges in large-scale deployments of AI agents, focusing on privacy leakage and collusion risks in multi-agent environments. As agents collaborate and negotiate across complex tasks, they may unintentionally expose sensitive information or coordinate in ways that misalign with human values. The research develops a simulation testbed to analyze these behaviors, introduces dynamic privacy protocols, and explores how scaling agent interactions amplifies risk. Outcomes include a taxonomy of collusion patterns, mitigation strategies, and design principles for safe, transparent, and trustworthy multi-agent systems\u2014informing future AI safety standards and governance.<\/p>\n\n\n\n

This research is conducted via The Agentic AI Research and Innovation (AARI) Initiative, which focuses on the next frontier of agentic systems through <em>Grand Challenges<\/em> with the academic community and Microsoft Research.<\/p>\n\n\n","protected":false},"excerpt":{"rendered":"

This project investigates critical safety challenges in large-scale deployments of AI agents, focusing on privacy leakage and collusion risks in multi-agent environments. As agents collaborate and negotiate across complex tasks, they may unintentionally expose sensitive information or coordinate in ways that misalign with human values. The research develops a simulation testbed to analyse these behaviours, […]<\/p>\n","protected":false},"featured_media":1155712,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-1155383","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Jianxun Lian","user_id":38470,"people_section":"Section name 0","alias":"jialia"},{"type":"user_nicename","display_name":"Beibei Shi","user_id":42162,"people_section":"Section name 0","alias":"besh"},{"type":"guest","display_name":"Yule Wen","user_id":1159033,"people_section":"Section name 0","alias":""},{"type":"user_nicename","display_name":"Xing Xie","user_id":34906,"people_section":"Section name 0","alias":"xingx"},{"type":"guest","display_name":"Diyi Yang","user_id":1157402,"people_section":"Section name 0","alias":""},{"type":"user_nicename","display_name":"Xiaoyuan Yi","user_id":40768,"people_section":"Section name 0","alias":"xiaoyuanyi"},{"type":"guest","display_name":"Yanzhe Zhang","user_id":1158807,"people_section":"Section name 
0","alias":""}],"msr_research_lab":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1155383","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":6,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1155383\/revisions"}],"predecessor-version":[{"id":1158808,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1155383\/revisions\/1158808"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1155712"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1155383"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1155383"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1155383"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1155383"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1155383"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}