Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
In this roundtable discussion, we will examine the current state of the art in language models (LMs) and how the lack of external and commonsense knowledge limits several applications, including open-domain question answering. This additional knowledge could come from structured databases, knowledge graphs, images, or other modalities, and we will cover some novel approaches for incorporating this data. Our discussion will address modeling, but also the available datasets that can help drive advances in this domain.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
- Track: Deep Learning & Large-Scale AI
- Date:
- Speakers: Yonatan Bisk, Daniel McDuff, Dragomir Radev
- Affiliation: Carnegie Mellon University, Microsoft Research Redmond, Yale University
Yonatan Bisk
Professor
Carnegie Mellon University

Daniel McDuff
Principal Researcher

Dragomir Radev
Professor
Yale University
Research talk: Resource-efficient learning for large pretrained models
Speakers: Subhabrata (Subho) Mukherjee
Research talk: Prompt tuning: What works and what's next
Speakers: Danqi Chen
Research talk: NUWA: Neural visual world creation with multimodal pretraining
Speakers: Lei Ji, Chenfei Wu
Research talk: Towards Self-Learning End-to-end Dialog Systems
Speakers: Baolin Peng
Research talk: WebQA: Multihop and multimodal
Speakers: Yonatan Bisk
Research talk: Closing the loop in natural language interfaces to relational databases
Speakers: Dragomir Radev