Academic research plays such an important role in advancing science, technology, culture, and society. This grant program helps ensure this community has access to the latest and leading AI models.

Brad Smith, Vice Chair and President

AFMR Goal: Accelerate scientific discovery in natural sciences

via proactive knowledge discovery, hypothesis generation, and multiscale multimodal data generation

This set of research projects explores how foundation models can be applied across a variety of domains in science and engineering, spanning agriculture, battery design, catalyst discovery, climate science, energy systems, health, Internet of Things (IoT), material science, and robotics. The methodologies explored include contextual understanding and representation, semantic parsing, interaction-skill acquisition, dynamic adaptation, and efficient retrieval. These efforts demonstrate how advanced AI can enable scientific discovery across many sectors by swiftly integrating foundation models with complementary technologies to drive innovation.

  • National University of Singapore: Jingxian Wang (PI)

    This project attempts to bridge a gap in foundation models: establishing links across multiple types of IoT sensors in varied environments without the constraints of elaborate sensor calibration.

  • École Polytechnique Fédérale de Lausanne: Josie Hughes (PI)

    The project aims to explore the role of foundation models in aiding the design of soft or compliant robots that interact with humans. It proposes a framework for contextual human-robot interactions, organized into two work packages that focus on design generation, contextual predictions for design, and qualitative feedback responses. The project leverages Microsoft Azure tools and AI models, including the GPT-4 series, Codex, and DALL-E 2.

  • The University of Nottingham: Valerio Giuffrida (PI)

    The proposal aims to build foundation models that can be effectively applied to diverse vision-based plant phenotyping and agricultural data and tasks. The focus is on developing pre-training methods, such as self-supervised learning, that leverage both labeled and unlabeled crop datasets for agricultural use. The project involves Azure data storage, compute, and Azure APIs, along with open-source foundation models and datasets.

  • The University of Hong Kong: Yanchao Yang (PI)

    This research aims to use foundation models to train embodied agents and enable them to acquire diverse physical interaction skills and adapt efficiently to dynamic situations. The training process is autonomous, requiring little to no human annotation effort. Utilizing language and vision-language models, the proposed methodology endows embodied agents with the capabilities to comprehend language instructions, plan tasks, and derive actions for accomplishing the interaction goal. Additionally, the proposal aims to improve foundation models through embodied learning.


  • Stanford University: Adam Brandt (PI)

    This project aims to deploy AI and large language models (LLMs) to extract data from energy datasets, with a focus on the oil and gas sector. The initiative is expected to help in the creation of a comprehensive database that highlights the broader energy industry and significantly contribute to formulating climate policy across the energy spectrum.

  • University of Michigan, Ann Arbor: Joyce Chai (PI)

    Explore the use of large language models (e.g., GPT-4) as communication facilitators for embodied AI agents in human-agent dialogue and multi-agent communication tasks. Our team will focus on grounding language to the agent’s perception and action, improving semantic representation of the 3D environment through dialogue, and enhancing task planning capabilities using GPT-4 in a simplified grid world.


  • University of New South Wales: Imran Razzak (PI)

    This research leverages foundation models to generate structured knowledge from materials science literature. Goals include enhancing pre-existing datasets; making data in the materials science literature more discoverable, interoperable, and reusable; and simplifying the data-mining workflow in materials science. The approach includes dataset management and construction, information extraction and inference, and knowledge discovery.

  • University of Michigan, Ann Arbor: Venkat Viswanathan (PI)

    The proposal seeks to employ foundation models, specifically large language models, to accelerate materials-level innovation necessary for the design of next-gen batteries. The research focuses on extensive evaluation of these models on a customized dataset on battery electrolyte design, aiming to enhance property prediction accuracy through foundation models. The ultimate goal is integrating foundation models into an automated materials design workflow, AutoMat, to expedite electrolyte design for advanced batteries.

  • University of Texas at Arlington: William Beksi (PI)

    The ability to reason about cause and effect from observational data is crucial for robust generalization in robotic systems. However, the construction of a causal graph, a mechanism for representing causal relations, presents an immense challenge. Currently, a nuanced grasp of causal inference, coupled with an understanding of causal relationships, must be manually programmed into a causal graphical model. To address this difficulty, we propose an innovative augmented reality framework for creating causal graphical models via large language models during human-robot interaction. Concretely, our system will bootstrap the causal discovery process by utilizing large language models to assist humans in selecting variables, establishing relationships, performing interventions, generating counterfactual explanations, and evaluating the resulting causal graph at every step.
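    The accumulation of LLM-suggested causal relations into a graph can be illustrated with a minimal sketch. Here `suggest_relations` is a hard-coded stand-in for the LLM proposing candidate edges, and the robotics variable names are invented for illustration; the acyclicity check reflects the requirement that causal graphical models be directed acyclic graphs. This is not the proposal's actual system, only a sketch of the bootstrapping idea.

    ```python
    # Minimal sketch: accumulating LLM-suggested causal relations into a DAG.
    # `suggest_relations` is a placeholder for a real LLM call.

    def suggest_relations(variables):
        """Stand-in for an LLM that proposes (cause, effect) pairs."""
        canned = {
            ("gripper_force", "object_slip"),
            ("object_mass", "object_slip"),
        }
        return {(c, e) for (c, e) in canned if c in variables and e in variables}

    class CausalGraph:
        def __init__(self):
            self.edges = set()

        def add_edge(self, cause, effect):
            # Reject edges that would create a cycle (causal graphs are DAGs).
            if self._reaches(effect, cause):
                raise ValueError(f"{cause} -> {effect} would create a cycle")
            self.edges.add((cause, effect))

        def _reaches(self, start, target):
            frontier, seen = [start], set()
            while frontier:
                node = frontier.pop()
                if node == target:
                    return True
                seen.add(node)
                frontier.extend(e for (c, e) in self.edges
                                if c == node and e not in seen)
            return False

        def parents(self, node):
            return {c for (c, e) in self.edges if e == node}

    variables = ["gripper_force", "object_mass", "object_slip"]
    graph = CausalGraph()
    for cause, effect in suggest_relations(variables):
        graph.add_edge(cause, effect)
    ```

    In the full framework, a human would review each suggested edge through the augmented reality interface before it is committed to the graph.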

  • Emory University: Eugene Agichtein (PI)

    The proposal aims to improve access to healthcare information by utilizing Large Language Models (LLMs) to close the gap between user queries and specialized medical knowledge. The goal is to enhance query understanding and representation for health-oriented search across dialects and languages. Techniques such as data augmentation, prompt optimization, retrieval augmentation, and pseudo-relevance feedback will be used. The outcome will be a toolkit for robust, multi-lingual health-oriented search models.
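    Pseudo-relevance feedback, one of the listed techniques, can be sketched in a few lines: run an initial retrieval, then expand the query with frequent terms from the top-ranked documents. The toy corpus and overlap-count scorer below are illustrative assumptions, not the proposal's actual retrieval model.

    ```python
    # Sketch of pseudo-relevance feedback: expand a health query with frequent
    # terms drawn from the top-ranked documents of an initial retrieval pass.
    from collections import Counter

    def retrieve(query, corpus, k=2):
        """Rank documents by count of overlapping query terms (toy scorer)."""
        q = set(query.lower().split())
        ranked = sorted(corpus,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def expand_query(query, corpus, k=2, n_terms=2):
        """Append the most frequent non-query terms from the top-k documents."""
        top_docs = retrieve(query, corpus, k)
        counts = Counter(w for d in top_docs for w in d.lower().split())
        for w in query.lower().split():
            counts.pop(w, None)
        extra = [w for w, _ in counts.most_common(n_terms)]
        return query + " " + " ".join(extra)

    corpus = [
        "high blood pressure is called hypertension",
        "hypertension treatment includes diet and medication",
        "flu symptoms include fever and cough",
    ]
    expanded = expand_query("blood pressure", corpus)
    ```

    The expanded query picks up the specialist term "hypertension" from the feedback documents, which is exactly the lay-to-medical vocabulary gap the proposal targets.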

  • University of Illinois Urbana-Champaign: Heng Ji (PI)

    The research proposes integrating generative AI and computational chemistry to accelerate catalyst discovery for biofuels. The research team has developed ChemReasoner, a system that uses large language models for heuristic search in chemical space. The proposal aims to extend this system by integrating density functional theory (DFT) simulation capabilities using Microsoft's Azure Quantum Elements, bringing together scientific-literature-driven symbolic reasoning with atomistic-level, structure-guided reasoning.
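    The general shape of LLM-guided heuristic search can be sketched as a best-first search in which a scorer ranks candidates and the search expands the most promising one each round. The `llm_score` heuristic, the element move set, and the alloy representation below are invented placeholders standing in for ChemReasoner's actual LLM and DFT evaluations.

    ```python
    # Toy sketch of heuristic search over candidate catalysts: a scorer
    # (a hand-written stand-in for an LLM/DFT evaluation) ranks candidates,
    # and a best-first search expands the highest-scoring one each round.
    import heapq

    def llm_score(candidate):
        """Placeholder heuristic: pretend the evaluator favors Pt content."""
        return candidate.count("Pt")

    def neighbors(candidate, elements=("Pt", "Ni", "Cu")):
        """Generate variants by appending one element (illustrative move set)."""
        return [candidate + [e] for e in elements]

    def heuristic_search(start, steps=3):
        # Max-heap behavior via negated scores.
        frontier = [(-llm_score(start), start)]
        best = start
        for _ in range(steps):
            _, current = heapq.heappop(frontier)
            if llm_score(current) > llm_score(best):
                best = current
            for nxt in neighbors(current):
                heapq.heappush(frontier, (-llm_score(nxt), nxt))
        return best

    best = heuristic_search(["Ni"])
    ```

    In the proposed extension, the cheap language-model heuristic would shortlist candidates and DFT simulations on Azure Quantum Elements would provide the expensive, structure-level scores.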

  • University of California, Santa Barbara: Shiyu Chang (PI)

    This proposal introduces a novel framework for detecting and mitigating hallucinations in LLMs by constructing and propagating model beliefs on a “belief tree”. For any statement generated by an LLM, our first goal is to curate the model’s internal knowledge by constructing a tree where each node represents a statement logically related to the parent node, and each edge represents the inferential relationship. We can then assess the hallucination of the root node by propagating the LLM’s beliefs from child nodes to parent nodes. Our rationale for constructing the belief tree is straightforward: directly determining the accuracy of a complex statement is challenging, while converting statements into a tree of interconnected propositions creates a clear visual and structural representation of the LLM’s internal knowledge. The truthfulness of a target statement can then be determined by the LLM’s belief in its “surrounding” statements, their inferential relationships, and the consistency of beliefs among them. Specifically, we aim to answer: Q1) How can belief trees be constructed to curate the model’s internal knowledge? Q2) How can the LLM’s confidence be propagated through the belief tree for reliable hallucination detection? Q3) How can the proposed method be used to update the LLM and reduce hallucinations?
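    The upward propagation step can be sketched in a few lines. The combination rule below (product of child beliefs, treating children as jointly required) and the hard-coded confidence scores are illustrative assumptions, not the proposal's actual scoring; a real system would query the LLM for each node's confidence and handle richer inferential relationships.

    ```python
    # Toy sketch of belief propagation on a "belief tree": leaves carry the
    # model's confidence in atomic propositions, and a parent statement's
    # belief is the product of its children's beliefs.

    class BeliefNode:
        def __init__(self, statement, belief=None, children=()):
            self.statement = statement
            self.belief = belief          # direct confidence, if queried
            self.children = list(children)

    def propagated_belief(node):
        """Propagate beliefs upward: a parent is only as credible as the
        joint credibility of the propositions it decomposes into."""
        if not node.children:
            return node.belief
        joint = 1.0
        for child in node.children:
            joint *= propagated_belief(child)
        return joint

    root = BeliefNode(
        "The Eiffel Tower was completed in 1889 in Paris",
        children=[
            BeliefNode("The Eiffel Tower is in Paris", belief=0.99),
            BeliefNode("The Eiffel Tower was completed in 1889", belief=0.95),
        ],
    )
    score = propagated_belief(root)
    ```

    A low propagated score at the root would flag the generated statement as a likely hallucination, even when the model asserts it confidently in isolation.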

  • Carnegie Mellon University: Katerina Fragkiadaki (PI)

    This project uses Large Language Models (LLMs) as semantic parsers that map instructions and dialogues, few-shot via appropriate prompting, to programs over neural perceptual and control routines. We will maintain and continually update a memory of prompt examples that will be retrieved and composed on the fly to prompt LLMs for semantic parsing of human instructions, questions, clarifications, and human-agent dialogues. We will further maintain memories of events in natural language that our LLM will access to maintain coherence across long timespans of interaction with users, and to personalize the agent’s behaviour.
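    The retrieve-and-compose step can be sketched as follows. The memory entries, the program syntax, and the Jaccard-overlap similarity are all invented placeholders; a real system would likely use learned embeddings to rank the stored examples.

    ```python
    # Sketch of retrieving stored prompt examples by lexical overlap and
    # composing them into a few-shot prompt for semantic parsing.

    def overlap(a, b):
        """Jaccard similarity over whitespace tokens (toy similarity)."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb)

    def compose_prompt(instruction, memory, k=2):
        """Retrieve the k most similar (instruction, program) pairs and
        format them as in-context examples ahead of the new instruction."""
        ranked = sorted(memory,
                        key=lambda ex: overlap(instruction, ex[0]),
                        reverse=True)
        lines = [f"Instruction: {i}\nProgram: {p}" for i, p in ranked[:k]]
        lines.append(f"Instruction: {instruction}\nProgram:")
        return "\n\n".join(lines)

    memory = [
        ("pick up the red block", "pick(obj='red_block')"),
        ("move to the table", "goto(loc='table')"),
        ("stack the blue block on the red block",
         "stack(top='blue_block', base='red_block')"),
    ]
    prompt = compose_prompt("pick up the blue block", memory)
    ```

    The LLM's completion after the final `Program:` line would then be executed against the agent's perceptual and control routines.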
