Communication Efficient Secure and Private Multi-Party Deep Learning
- Sankha Das,
- Sayak Ray Chowdhury,
- Nishanth Chandran,
- Divya Gupta,
- Satya Lokam,
- Rahul Sharma
Distributed training, in which multiple parties jointly train a model on their respective datasets, is a promising approach to harnessing the large volumes of diverse data needed to train modern machine learning models. However, this approach immediately raises security and privacy concerns: each party wishes to protect its data from the other parties during training, and private information must not leak from the trained model through inference attacks after training. In this paper, we address both concerns simultaneously by designing efficient Differentially Private, secure Multiparty Computation (DP-MPC) protocols for jointly training a model on data distributed among multiple parties. Our DP-MPC protocol in the two-party setting is 56-794$\times$ more communication-efficient and 16-182$\times$ faster than previous such protocols. Conceptually, our work simplifies and improves on previous attempts to combine techniques from secure multiparty computation and differential privacy, especially in the context of ML training.
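As background for the differential-privacy side of such training, the sketch below shows a standard DP-SGD-style noisy gradient step (per-example clipping, aggregation, and Gaussian noise) in plain Python. This is an illustrative sketch only, not the paper's DP-MPC protocol: the function name and parameters are hypothetical, and in a DP-MPC protocol the aggregation and noise addition would be carried out under secure multiparty computation rather than in the clear.

```python
# Illustrative only: a generic DP-SGD-style noisy gradient step.
# NOT the paper's DP-MPC protocol; in DP-MPC the clipping, summation,
# and noising would happen inside a secure computation between parties.
import numpy as np

def noisy_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each example's gradient, sum, add calibrated Gaussian noise, and average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise with standard deviation proportional to the clipping bound.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy usage: per-example gradients that, in a multi-party setting,
# would come from different parties' private data.
grads = [np.array([0.5, -1.2, 0.3]), np.array([2.0, 0.1, -0.7])]
print(noisy_gradient_step(grads))
```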