Podcasts

  1. Abstracts: December 12, 2023

    December 12, 2023 | Gretchen Huizinga, Tao Qin, and Lijun Wu

    Members of the research community at Microsoft work continuously to advance their respective fields. Abstracts brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements. In this episode, Senior Principal Research Manager Tao Qin and Senior Researcher…

  2. Abstracts: December 11, 2023

    December 11, 2023 | Gretchen Huizinga and Alessandro Sordoni

    By treating language models as layers in a network and prompts as learnable parameters, researchers aim for more adaptable, reusable LLM architectures. Check out the work in the “Abstracts” podcast series with guest Alessandro Sordoni and at #NeurIPS2023.

  3. Abstracts: December 6, 2023

    December 6, 2023 | Gretchen Huizinga and Xing Xie

    "Abstracts”—your source for world-class research in brief—welcomes Senior Principal Research Manager Xing Xie to the podcast series to discuss his paper on evaluating general-purpose AI with psychometrics.

  4. Abstracts: October 23, 2023

    October 23, 2023 | Gretchen Huizinga, Andy Gordon, and Carina Negreanu

    Today on “Abstracts,” Partner Research Manager Andy Gordon and Senior Researcher Carina Negreanu explore new work introducing co-audit, a term for any tool-assisted experience that helps users of generative AI find and fix mistakes in AI output.

  5. Abstracts: October 9, 2023

    October 9, 2023 | Gretchen Huizinga and Sheng Zhang

    Researcher Dr. Sheng Zhang joins “Abstracts”—your source for cutting-edge research in brief—to discuss a recent paper on distilling large language models into smaller, more efficient ones capable of excelling in broad application classes.