{"id":793547,"date":"2021-11-16T08:00:59","date_gmt":"2021-11-16T16:00:59","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=793547"},"modified":"2021-11-16T10:48:40","modified_gmt":"2021-11-16T18:48:40","slug":"keynote-redunet-deep-convolutional-networks-from-the-principle-of-rate-reduction","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/keynote-redunet-deep-convolutional-networks-from-the-principle-of-rate-reduction\/","title":{"rendered":"Keynote: ReduNet: Deep (convolutional) networks from the principle of rate reduction"},"content":{"rendered":"

In this talk, we will offer an entirely white-box interpretation of deep (convolutional) networks from the perspective of data compression and group invariance. We'll show how modern deep layered architectures, linear (convolutional) operators, nonlinear activations, and even all parameters can be derived from the principle of maximizing rate reduction with group invariance. We'll cover how all layers, operators, and parameters of the network are explicitly constructed through forward propagation rather than learned through back propagation. We'll also explain how all components of the so-obtained network, called ReduNet, have precise optimization, geometric, and statistical interpretations. You'll learn how this principled approach reveals a fundamental tradeoff between invariance and sparsity for class separability; how it reveals a fundamental connection between deep networks and the Fourier transform for group invariance, namely the computational advantage of working in the spectral domain; and how it clarifies the mathematical role of forward and backward propagation. Finally, you'll discover how the so-obtained ReduNet is amenable to fine-tuning through both forward and backward propagation to optimize the same objective.
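
The sketch below illustrates the kind of rate-reduction objective the talk refers to, following the maximal coding rate reduction formulation associated with ReduNet: expand the coding rate of all features while compressing the rate within each class. It is a minimal, assumed illustration, not the talk's own code; the function names, the precision parameter eps, and the toy data are all illustrative choices. ReduNet layers can be thought of as forward-constructed gradient-ascent steps on an objective of this form.

import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z): bits to encode the columns of Z up to precision eps."""
    d, n = Z.shape
    # log det of I + (d / (n * eps^2)) * Z Z^T, taken via slogdet for stability
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j).
    Maximizing this expands features across classes while compressing
    features within each class."""
    d, n = Z.shape
    compressed = 0.0
    for j in np.unique(labels):
        Zj = Z[:, labels == j]          # features belonging to class j
        nj = Zj.shape[1]
        Rj = 0.5 * np.linalg.slogdet(
            np.eye(d) + (d / (nj * eps**2)) * Zj @ Zj.T)[1]
        compressed += (nj / n) * Rj
    return coding_rate(Z, eps) - compressed

# Toy usage: random features normalized to the unit sphere, two classes.
rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 100))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)
labels = np.repeat([0, 1], 50)
print(rate_reduction(Z, labels))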

Related resources:

Deep (Convolution) Networks from First Principles