{"id":788945,"date":"2021-10-26T23:54:04","date_gmt":"2021-10-27T06:54:04","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=788945"},"modified":"2021-11-25T19:01:25","modified_gmt":"2021-11-26T03:01:25","slug":"privacy-preserving-deep-learning","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/privacy-preserving-deep-learning\/","title":{"rendered":"Privacy-preserving Deep Learning"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Privacy-preserving Deep Learning<\/h1>\n\n\n\n

<\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n

Large machine learning models can memorize their training data, which poses a privacy risk. Preserving privacy requires controlling access to the data and measuring the privacy loss. Differential privacy (DP) is widely recognized as the gold standard of privacy protection due to its mathematical rigor. We propose a series of approaches that address the challenges of applying DP to large deep neural networks, achieving new state-of-the-art results for private learning.<\/p>\n\n\n\n\n\n
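The standard mechanism for applying DP to neural-network training is DP-SGD: clip each per-example gradient to bound any single example's influence, then add calibrated Gaussian noise to the averaged gradient. The sketch below is a minimal NumPy illustration of that idea, not code from this project; the function name and parameter values are assumptions chosen for clarity.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One illustrative DP-SGD step (not the project's implementation).

    1. Clip each per-example gradient to L2 norm <= clip_norm.
    2. Average the clipped gradients.
    3. Add Gaussian noise scaled by noise_multiplier * clip_norm / batch_size.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so each example's gradient norm is bounded.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = avg + rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * noisy_grad
```

The clipping bound is what makes the per-step privacy loss quantifiable: with the sensitivity capped at `clip_norm`, the Gaussian noise yields a DP guarantee whose cumulative cost over training is tracked by a privacy accountant.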