{"id":791894,"date":"2021-11-16T08:00:14","date_gmt":"2021-11-16T16:00:14","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=791894"},"modified":"2021-11-02T13:12:22","modified_gmt":"2021-11-02T20:12:22","slug":"research-talk-privacy-in-machine-learning-research-at-microsoft","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/research-talk-privacy-in-machine-learning-research-at-microsoft\/","title":{"rendered":"Research talk: Privacy in machine learning research at Microsoft"},"content":{"rendered":"
Speaker: Melissa Chase, Principal Researcher, Microsoft Research Redmond
Training modern machine learning models requires large amounts of data, and often that data may be private or confidential. The area of privacy-preserving machine learning studies the extent to which this private data may be exposed by the resulting model, and how such leakage can be reduced or prevented. This talk will first introduce the area of privacy-preserving machine learning, then give an overview of how we have been thinking about this problem at Microsoft Research. It will briefly summarize some of our work on different aspects of the problem, and then discuss in more depth one project that examines the extent to which text models store recognizable information about the users in their training data. Specifically, we will describe a new black-box membership inference attack that works on models containing a word embedding layer and exploits the inherent structure of word embeddings.
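The abstract does not spell out the attack's details. As a rough illustration only, the minimal sketch below shows the general shape of a black-box membership inference test on a text model: compare the model's confidence on a candidate training sequence against its confidence on close paraphrases built by swapping in embedding-space neighbor words. The names query_model and membership_score, the confidence-gap scoring rule, and the example strings are all hypothetical placeholders, not the construction presented in the talk.

# Hypothetical sketch of a black-box membership inference test on a
# text model. This is a generic confidence-gap heuristic, NOT the
# specific attack described in the talk; query_model is a stand-in
# for an opaque model API.
import numpy as np

def query_model(text: str) -> float:
    """Stand-in for a black-box API returning the model's average
    per-token log-likelihood of `text`. Placeholder: it just returns
    deterministic noise keyed on the input so the sketch runs."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return float(rng.normal(loc=-3.0, scale=1.0))

def membership_score(text: str, neighbor_texts: list[str]) -> float:
    """Gap between the model's confidence on `text` and its mean
    confidence on nearby paraphrases (words swapped for their
    embedding-space neighbors). A large positive gap hints that the
    exact sequence was memorized, i.e. likely in the training data."""
    target = query_model(text)
    neighbor_mean = float(np.mean([query_model(t) for t in neighbor_texts]))
    return target - neighbor_mean

if __name__ == "__main__":
    candidate = "alice's phone number is 555-0147"
    paraphrases = [
        "alice's telephone number is 555-0147",  # neighbor-word swap
        "alice's phone number is 555-0148",
    ]
    print("membership score:", membership_score(candidate, paraphrases))

In this framing, the structure of the embedding space is what supplies cheap, semantically close paraphrases to contrast against; how the actual attack in the talk uses that structure is left to the presentation itself.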