{"id":995583,"date":"2024-01-05T08:09:50","date_gmt":"2024-01-05T16:09:50","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=995583"},"modified":"2024-05-29T17:51:52","modified_gmt":"2024-05-30T00:51:52","slug":"afmr-domain-applications","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/afmr-domain-applications\/","title":{"rendered":"AFMR: Domain applications"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"white\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\tAccelerating Foundation Models Research\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Domain applications<\/h1>\n\n\n\n

<\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n

\n

"Academic research plays such an important role in advancing science, technology, culture, and society. This grant program helps ensure this community has access to the latest and leading AI models."

— Brad Smith, Vice Chair and President
AFMR Goal: Accelerate scientific discovery in natural sciences

via proactive knowledge discovery, hypothesis generation, and multiscale multimodal data generation

This set of research projects explores how foundation models can be applied to a variety of domain applications in science and engineering, spanning agriculture, battery design, catalyst discovery, climate science, energy systems, health, the Internet of Things (IoT), materials science, and robotics. The methodologies explored include contextual understanding and representation, semantic parsing, interaction skill acquisition, dynamic adaptation, and efficient retrieval. Together, these efforts demonstrate how advanced AI can enable scientific discovery across a range of applications that integrate foundation models with complementary technologies to drive innovation across many sectors.
National University of Singapore: Jingxian Wang (PI)

This project attempts to bridge a gap in foundation models by establishing links across multiple types of IoT sensors in varied environments, without the constraints of elaborate sensor calibration.

École Polytechnique Fédérale de Lausanne: Josie Hughes (PI)

The project aims to explore the role of foundation models in aiding the design of soft or compliant robots that interact with humans, and proposes a framework for contextual human-robot interaction. The work is organized into two work packages, focusing on design generation, contextual predictions for design, and qualitative feedback responses. The project leverages Microsoft Azure tools and AI models, including the GPT-4 series, Codex, and DALL-E 2.

The University of Nottingham: Valerio Giuffrida (PI)

The proposal aims to build foundation models that can be applied effectively to diverse vision-based plant phenotyping and agricultural data and tasks. The focus is on developing pre-training methods, such as self-supervised learning, that leverage both labeled and unlabeled crop datasets for agricultural use. The project will use Azure data storage, compute, and Azure APIs, along with open-source foundation models and datasets.
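As a rough illustration of the self-supervised direction described above, the sketch below shows a SimCLR-style contrastive pretraining step in PyTorch. The backbone choice, projector sizes, and the random tensors standing in for augmented crop-image batches are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch of SimCLR-style self-supervised pretraining on unlabeled crop
# images; hyperparameters and the ResNet-18 backbone are assumptions.
import torch
import torch.nn.functional as F
from torch import nn
from torchvision import models

class ContrastiveEncoder(nn.Module):
    def __init__(self, proj_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # no labels needed for pretraining
        backbone.fc = nn.Identity()               # keep the 512-d pooled features
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, proj_dim)
        )

    def forward(self, x):
        return F.normalize(self.projector(self.backbone(x)), dim=1)

def nt_xent_loss(z1, z2, temperature: float = 0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, d), rows are unit vectors
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)           # positive pair is the other view

# One training step on two random augmentations of the same unlabeled images
# (random tensors stand in for the augmented crop-image batches).
encoder = ContrastiveEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
view1 = torch.randn(8, 3, 224, 224)
view2 = torch.randn(8, 3, 224, 224)
optimizer.zero_grad()
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
optimizer.step()
```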

The University of Hong Kong: Yanchao Yang (PI)

This research aims to use foundation models to train embodied agents, enabling them to acquire diverse physical interaction skills and adapt efficiently to dynamic situations. The training process is autonomous, requiring little to no human annotation effort. Using language and vision-language models, the proposed methodology equips embodied agents with the ability to comprehend language instructions, plan tasks, and derive actions to accomplish interaction goals. Additionally, the proposal aims to improve the foundation models themselves through embodied learning.

Stanford University: Adam Brandt (PI)

This project aims to deploy AI and large language models (LLMs) to extract data from energy datasets, with a focus on the oil and gas sector. The initiative is expected to support the creation of a comprehensive database covering the broader energy industry and to contribute significantly to formulating climate policy across the energy spectrum.
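To make the LLM-based extraction idea concrete, here is a minimal sketch of one plausible setup using the Azure OpenAI Python SDK. The deployment name, target schema (facility, fuel_type, annual_output), and example text are illustrative assumptions, not the project's actual pipeline.

```python
# Hedged sketch: extracting structured fields from an energy-sector document
# with a GPT-4 deployment; schema and deployment name are assumptions.
import json
import os
from openai import AzureOpenAI  # assumes the openai>=1.0 Python SDK

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def extract_energy_record(document_text: str) -> dict:
    """Ask the model to return a small JSON record describing one facility."""
    prompt = (
        "Extract the facility name, fuel type, and annual output (with units) "
        "from the text below. Respond with JSON only, using the keys "
        '"facility", "fuel_type", "annual_output".\n\n' + document_text
    )
    response = client.chat.completions.create(
        model="gpt-4",  # name of the Azure deployment, assumed for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

record = extract_energy_record(
    "The Example Basin gas plant processed roughly 1.2 billion cubic feet of "
    "natural gas in 2022."
)
print(record)
```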

University of Michigan, Ann Arbor: Joyce Chai (PI)

This project explores the use of large language models (e.g., GPT-4) as communication facilitators for embodied AI agents in human-agent dialogue and multi-agent communication tasks. The team will focus on grounding language in the agent's perception and action, improving semantic representation of the 3D environment through dialogue, and enhancing task planning capabilities using GPT-4 in a simplified grid world.
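As a rough sketch of what planning in a simplified grid world could look like, the snippet below serializes a grid into a prompt and parses the returned move list. The function `ask_gpt4` is a hypothetical helper standing in for a real GPT-4 call, and the grid encoding and move vocabulary are illustrative assumptions rather than the team's actual setup.

```python
# Sketch of GPT-4-based planning in a toy grid world; `ask_gpt4`, the grid
# encoding, and the move vocabulary are assumptions for illustration.
def render_grid(grid: list[str]) -> str:
    return "\n".join(grid)

def plan_prompt(grid: list[str]) -> str:
    return (
        "You control the agent 'A' in the grid below ('#' is a wall, 'G' is "
        "the goal). Reply with one move per line, chosen from: up, down, "
        "left, right.\n\n" + render_grid(grid)
    )

def parse_plan(reply: str) -> list[str]:
    moves = {"up", "down", "left", "right"}
    return [line.strip().lower() for line in reply.splitlines()
            if line.strip().lower() in moves]

grid = [
    "A.#",
    "..#",
    "..G",
]
# reply = ask_gpt4(plan_prompt(grid))  # hypothetical call to a GPT-4 deployment
reply = "down\ndown\nright\nright"     # stand-in reply so the sketch runs end to end
print(parse_plan(reply))               # ['down', 'down', 'right', 'right']
```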

Related paper: