Understanding Generalization Error of SGD in Nonconvex Optimization
- Yi Zhou,
- Huishuai Zhang,
- Yingbin Liang
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
The success of deep learning has led to rising interest in the generalization properties of the stochastic gradient descent (SGD) method, and stability analysis is one popular approach to studying them. Existing stability-based generalization bounds do not incorporate the interplay between the optimization dynamics of SGD and the underlying data distribution, and hence cannot capture even the effect of randomized labels on generalization performance. In this paper, we establish generalization error bounds for SGD by characterizing the corresponding stability in terms of the population risk at initialization and the on-average variance of the stochastic gradients. These characterizations lead to improved bounds on the generalization error of SGD and are empirically consistent with the effect of random labels on generalization performance. We also study the regularized risk minimization problem with strongly convex regularizers and obtain improved generalization error bounds for proximal SGD.
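For context, here is a minimal sketch of the objects the abstract refers to, written in our own notation rather than the paper's (the sample $S$, loss $\ell$, step sizes $\alpha_t$, and regularizer $h$ are illustrative assumptions; the precise assumptions and bounds are in the paper):

```latex
% A minimal sketch, in our own notation (not necessarily the paper's):
% S = {z_1, ..., z_n} is the training sample, \ell the loss, R_S the empirical
% risk, R the population risk, and w_T the iterate returned after T SGD steps.
\[
  w_{t+1} \;=\; w_t - \alpha_t \nabla \ell(w_t; z_{\xi_t}),
  \qquad \xi_t \sim \mathrm{Unif}\{1,\dots,n\}
  \quad\text{(SGD update)}
\]
\[
  \epsilon_{\mathrm{gen}} \;=\; \mathbb{E}\!\left[ R(w_T) - R_S(w_T) \right]
  \quad\text{(generalization error, bounded via stability)}
\]
\[
  w_{t+1} \;=\; \operatorname{prox}_{\alpha_t h}\!\left( w_t - \alpha_t \nabla \ell(w_t; z_{\xi_t}) \right)
  \quad\text{(proximal SGD for the regularized risk } R_S + h \text{)}
\]
```

Stability-based analyses bound $\epsilon_{\mathrm{gen}}$ by how much the output $w_T$ changes when a single example in $S$ is replaced; per the abstract, the bounds here make that quantity depend on the population risk at initialization and the on-average variance of the stochastic gradients rather than on worst-case constants.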