Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

NeurIPS 2023

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through knowledge distillation. Challenging the conjecture that larger models do not make better teachers, we show strong gains in out-of-distribution robustness when distilling from pretrained large-scale models. Building on this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating samples more informative than those produced by standard adversarial training. We provide a theoretical framework for using a robust teacher in the knowledge-distillation-with-data-augmentation setting, and we demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds negligible computational overhead compared to standard adversarial training and can easily be combined with other regularization techniques for improved performance.
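To make the pipeline concrete, here is a minimal NumPy sketch of the three steps the abstract describes: generate an adversarial example against the teacher, discretize it, and use the teacher's soft labels on the discretized input as distillation targets. Everything here is a stand-in assumption, not the paper's implementation: a tiny linear-softmax model plays the role of the robust teacher, a one-step FGSM perturbation replaces the paper's adversarial attack, and uniform quantization to a fixed codebook replaces the VQGAN discretizer.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fgsm_adversarial(x, y, W, eps=0.1):
    """One FGSM step against the (toy) teacher: move x along the sign of
    the gradient of the cross-entropy loss w.r.t. the input."""
    p = softmax(x @ W)        # teacher predictions on clean inputs
    grad_x = (p - y) @ W.T    # dCE/dx for a linear-softmax teacher
    return x + eps * np.sign(grad_x)

def discretize(x, levels=8):
    """Stand-in for the VQGAN: uniform quantization of inputs onto a
    fixed codebook of `levels` values in [0, 1]."""
    return np.round(np.clip(x, 0.0, 1.0) * (levels - 1)) / (levels - 1)

def distillation_targets(x_disc, W, T=2.0):
    """Teacher soft labels (temperature T) on the discretized
    adversarial examples; the student would be trained to match these."""
    return softmax((x_disc @ W) / T)
```

A student would then minimize a divergence (e.g. KL) between its own predictions on the discretized adversarial inputs and these teacher targets, alongside the usual clean-data loss.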