The Knowledge and Language Team is part of the Azure Cognitive Services Research (CSR) group, focusing on cutting-edge research and the development of next-generation frameworks for knowledge and natural language processing.

We are working on: 1) Knowledge-enhanced Language Models, 2) Summarization, 3) Few-shot and Prompt Learning, and 4) Multimodal Learning. We develop state-of-the-art deep learning technologies for both research and business applications.

Our work has resulted in multiple publications at top NLP conferences, human parity on the HellaSwag, CommonsenseQA, and CoQA benchmarks, and 1st-place finishes on the CommonGen, FEVER, ARC, and SQuAD v1.0 leaderboards.

Updates

May 5, 2023: Chenguang Zhu and Prof. Diyi Yang from Stanford University gave the tutorial on Summarization of Dialogues and Conversations At Scale at EACL 2023.

May 1, 2023: 5 papers accepted at ACL 2023.

Apr. 25, 2023: Felipe Vieira Frujeri's paper "DOTE: Rethinking (Predictive) WAN Traffic Engineering" received the Best Paper Award at NSDI 2023.

Apr. 24, 2023: 2 papers accepted at ICML 2023.

Feb. 28, 2023: 2 papers accepted at CVPR 2023.

Feb. 27, 2023: Chenguang Zhu gave the talk "How We Achieved Human Parity in CommonsenseQA – Fusing Knowledge into Language Models" at Singapore Management University. [Slides]

Feb. 27, 2023: We gave the tutorial on Knowledge-Augmented Methods for Natural Language Processing at WSDM 2023.

Feb. 23, 2023: 1 paper accepted at TACL.

Feb. 13, 2023: We organized The Workshop on Knowledge Augmented Methods for NLP (KnowledgeNLP-AAAI’23) at AAAI 2023.

Jan. 20, 2023: 2 papers accepted at ICLR 2023.

Nov. 18, 2022: 1 paper accepted at AAAI 2023.

Oct. 6, 2022: 11 papers accepted at EMNLP 2022.