Large machine learning models can memorize their training data, which poses a privacy risk. Preserving privacy requires controlling access to the data and measuring the privacy loss. Differential privacy (DP) is widely recognized as the gold standard of privacy protection due to its mathematical rigor. We propose a series of approaches that address the challenges of applying DP to large deep neural networks and achieve new state-of-the-art results for private learning.
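
For reference, and as background not stated in the abstract itself, the standard formulation of the DP guarantee is the following: a randomized mechanism $M$ satisfies $(\varepsilon, \delta)$-differential privacy if, for all neighboring datasets $D$ and $D'$ differing in a single record and all measurable output sets $S$,
$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta.$$
Smaller values of $\varepsilon$ and $\delta$ correspond to a stronger privacy guarantee.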