{"id":788159,"date":"2023-09-25T21:53:00","date_gmt":"2023-09-26T04:53:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=788159"},"modified":"2024-02-28T07:03:22","modified_gmt":"2024-02-28T15:03:22","slug":"agent-ai","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/agent-ai\/","title":{"rendered":"Agent AI"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Agent AI

Agent-based multimodal AI systems are becoming a ubiquitous presence in our everyday lives. A promising direction for making these systems more interactive is to embody them as agents within specific environments. Grounding large foundation models to act as agents within specific environments provides a way of incorporating visual and contextual information into an embodied system. For example, a system that can perceive user actions, human behavior, environment objects, audio expressions, and the collective sentiment of a scene can be used to inform and direct agent responses within that environment. We define Agent AI as an interactive system that can perceive visual stimuli, language inputs, or other environmentally grounded data and can produce meaningful embodied actions such as manipulation, navigation, and gesture. In particular, we focus on improving agents based on next-action prediction by incorporating external knowledge, multimodality, and human feedback obtained by the interactive agent. We argue that by developing agentic AI systems in grounded environments, we can also reduce the hallucinations of large foundation models and their tendency to generate environmentally incorrect outputs. To accelerate research on embodied agent intelligence, we propose a new project on General Embodied Agent AI, which focuses on the broader embodied and agentic aspects of multimodal interactions.
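To make the next-action-prediction loop above concrete, the following is a minimal, hypothetical sketch of one agent cycle: perceive a multimodal observation, ground it with retrieved external knowledge, predict the next action, and fold human feedback back into the agent's context. The names used here (Observation, AgentState, retrieve_knowledge, predict_next_action, agent_step) are illustrative assumptions, not part of any released Agent AI codebase.

```python
# Hypothetical sketch of the perceive -> ground -> act -> feedback cycle
# described above. All names are illustrative, not a released API.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Observation:
    """One multimodal snapshot of the environment (placeholder fields)."""
    image_description: str          # stand-in for visual features
    language_input: Optional[str]   # user utterance, if any
    audio_event: Optional[str]      # e.g. "door closing"


@dataclass
class AgentState:
    history: List[str] = field(default_factory=list)   # grounded context so far
    feedback: List[str] = field(default_factory=list)  # human corrections


def retrieve_knowledge(obs: Observation) -> str:
    """Placeholder for external-knowledge retrieval (e.g. object affordances)."""
    return f"known affordances for scene: {obs.image_description}"


def predict_next_action(state: AgentState, obs: Observation, knowledge: str) -> str:
    """Stand-in for a grounded foundation-model call doing next-action prediction."""
    prompt = " | ".join(state.history[-3:] + [knowledge, obs.language_input or ""])
    # A real system would query a multimodal foundation model here; we return a stub.
    return f"navigate_towards({obs.image_description!r})  # conditioned on: {prompt[:40]}..."


def agent_step(state: AgentState, obs: Observation,
               human_feedback: Optional[str] = None) -> str:
    """One perceive -> ground -> act -> incorporate-feedback cycle."""
    if human_feedback:
        state.feedback.append(human_feedback)
    knowledge = retrieve_knowledge(obs)
    action = predict_next_action(state, obs, knowledge)
    state.history.append(f"obs={obs.image_description}; action={action}")
    return action


if __name__ == "__main__":
    state = AgentState()
    obs = Observation("kitchen counter with a mug", "please bring me the mug", None)
    print(agent_step(state, obs))
    # Human feedback from the interaction is retained and conditions later predictions.
    print(agent_step(state, obs, human_feedback="approach from the left side"))
```

The design point of the sketch is simply that action prediction is conditioned jointly on the grounded observation, retrieved knowledge, and accumulated interaction history, which is the loop the project aims to improve.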

\"Agent\"<\/figure>\n\n\n\n

<\/p>\n\n\n\n

The following related papers have been published:

1) Agent AI Towards a Holistic Intelligence;

2) Agent foundation model for embodied interaction in Robotics, Gaming, and Healthcare;

3) Multi-agent for Gaming (GPT-4) in simulated and real infrastructure;

4) Navigation Agent for Robotics;

5) GPT-4V agent for Robotics;

6) Agent AI survey and GPT-4V for Robotics, Gaming, and Healthcare.

Community building:

In addition, we will organize a CVPR 2024 Tutorial on Generalist Agent AI and will release two new embodied datasets – CuisineWorld and VideoAnalytica – together with a set of baseline models, encouraging researchers across the world to develop new models and systems and to explore ways to evaluate and improve performance on our agent-based multimodal leaderboard.

To push the frontier of this important area, this project aims to bring together researchers and practitioners working on embodied agent AI to share ideas and insights. This is an emerging research area that poses new challenges for embodied AI systems, and there is still significant room for improvement. A deeper understanding across audio, vision, and language has also started to play a key role in human-machine interaction systems. Our project will greatly advance large foundation model technologies, including cross-modal understanding, reality-agnostic integration, generic agent information processing, and human-aesthetic evaluation.

\"Agent
Agent AI is emerging as a promising route for early progress on the path to Artificial General Intelligence (AGI). The Agent AI training process has been shown to demonstrate an ability for multi-modal understanding in the physical world, and provides a framework for reality-agnostic training by leveraging generative AI alongside multiple independent sources of data. Large foundation models trained for agent and action-related tasks can be applied to physical and virtual\/simulated worlds when trained with cross-reality training data. We present the general overview of an Agent AI system that can perceive and act in many different domains and applications, possibly serving as a route towards AGI using an agent paradigm.<\/em>
2401.03568.pdf (arxiv.org) (opens in new tab)<\/span><\/a><\/figcaption><\/figure>\n\n\n","protected":false},"excerpt":{"rendered":"
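As a brief illustration of the "reality-agnostic training with cross-reality data" idea in the figure caption, here is a hypothetical sketch of how trajectories from simulated and physical environments might be tagged by source and interleaved into a single training stream. The names, fields, and sampling ratio are assumptions for illustration only, not a description of the actual training pipeline.

```python
# Hypothetical sketch: interleave simulated and real-world trajectories so a
# single agent model is trained across realities. Ratios and names are illustrative.

import random
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Trajectory:
    source: str              # "simulation" or "real_world"
    observations: List[str]  # placeholder for multimodal observations
    actions: List[str]       # placeholder for executed actions


def cross_reality_batches(sim: List[Trajectory], real: List[Trajectory],
                          batch_size: int = 4, sim_ratio: float = 0.75,
                          seed: int = 0) -> Iterator[List[Trajectory]]:
    """Yield mixed batches; simulated data is typically plentiful, real data scarce."""
    rng = random.Random(seed)
    while True:
        batch = []
        for _ in range(batch_size):
            pool = sim if rng.random() < sim_ratio else real
            batch.append(rng.choice(pool))
        yield batch


if __name__ == "__main__":
    sim_data = [Trajectory("simulation", ["synthetic scene"], ["pick"]) for _ in range(100)]
    real_data = [Trajectory("real_world", ["camera frame"], ["pick"]) for _ in range(10)]
    batch = next(cross_reality_batches(sim_data, real_data))
    print([t.source for t in batch])  # e.g. ['simulation', 'simulation', 'real_world', 'simulation']
```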
