• Open Access

Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization

Qianyi Li and Haim Sompolinsky
Phys. Rev. X 11, 031059 – Published 16 September 2021

Abstract

The groundbreaking success of deep learning in many real-world tasks has triggered an intense effort to theoretically understand the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work, we study the statistical mechanics of learning in deep linear neural networks (DLNNs) in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is highly nonlinear; hence, studying its properties reveals some of the essential features of nonlinear deep neural networks (DNNs). Importantly, we exactly solve the network properties following supervised learning using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the backpropagating kernel renormalization (BPKR), which allows for the incremental integration of the network weights layer by layer starting from the network output layer and progressing backward until the first layer’s weights are integrated out. This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. BPKR does not assume specific statistics of the input or the task’s output. Furthermore, by performing partial integration of the layers, the BPKR allows us to compute the emergent properties of the neural representations across the different hidden layers. We propose a heuristic extension of the BPKR to nonlinear DNNs with rectified linear units (ReLU). Surprisingly, our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks of modest depth, in a wide regime of parameters. Our work is the first exact statistical mechanical study of learning in a family of deep neural networks, and the first successful theory of learning through the successive integration of degrees of freedom in the learned weight space.
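
To make concrete the abstract's point that the units are linear yet learning is nonlinear, the following minimal sketch (not taken from the paper; the network sizes, data, and Gaussian initialization are arbitrary illustrative assumptions) builds a two-layer deep linear network in NumPy: its end-to-end input-output map collapses to a single matrix product, while the squared-error training loss is a quartic, non-convex function of the individual layer weights.

import numpy as np

rng = np.random.default_rng(0)
N0, N1, N2 = 10, 50, 1      # input dimension, hidden width, output dimension (assumed values)
P = 20                      # number of training examples (assumed value)

X = rng.standard_normal((P, N0))                   # inputs
y = rng.standard_normal((P, N2))                   # target outputs

W1 = rng.standard_normal((N0, N1)) / np.sqrt(N0)   # first-layer weights
a = rng.standard_normal((N1, N2)) / np.sqrt(N1)    # readout weights

# The end-to-end function is linear in the input: f(x) = x @ (W1 @ a) ...
f = X @ W1 @ a

# ... but the squared-error loss is quartic (hence non-convex) in the weights
# {W1, a}, which is what makes learning in a DLNN a nonlinear problem.
loss = 0.5 * np.sum((f - y) ** 2)
print(loss)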

  • Received 6 December 2020
  • Revised 27 May 2021
  • Accepted 9 July 2021

DOI:https://doi.org/10.1103/PhysRevX.11.031059

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.


Physics Subject Headings (PhySH)

Physics of Living Systems, Networks, Statistical Physics & Thermodynamics, Interdisciplinary Physics

Authors & Affiliations

Qianyi Li1,2 and Haim Sompolinsky2,3,4

  • 1The Biophysics Program, Harvard University, Cambridge, Massachusetts 02138, USA
  • 2Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
  • 3Racah Institute of Physics, Hebrew University, Jerusalem 91904, Israel
  • 4Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem 91904, Israel

Popular Summary

Deep learning—a class of machine-learning algorithms—has achieved unprecedented success in many real-world challenges, such as machine vision, speech recognition, and natural language processing. And yet, many theoretical problems remain unsolved, such as explaining the remarkable generalization ability of deep neural networks despite their heavy overparametrization. Here, we develop a novel theory of deep neural networks: we derive it rigorously for the simpler case of linear networks, in which the input-output function of an individual unit is linear, and then extend it heuristically to nonlinear networks.

In a deep neural network, raw data (such as an image) is fed to a layer of neuronlike nodes, which connect to a second layer, then a third layer, and so on. From the final output layer emerges the result of the network’s analysis (e.g., this is an image of a dog). Crucial to the network’s performance are weights that encode the relative influence of nodes in one layer on the nodes of the successive layer.
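
The layered computation described above can be written in a few lines. The toy sketch below (the layer widths, random weights, and the reading of the two outputs as class scores are illustrative assumptions, not the paper's setup) propagates an input through successive layers of linear units, with each layer's weight matrix encoding how strongly its nodes influence the nodes of the next layer.

import numpy as np

rng = np.random.default_rng(1)
layer_widths = [784, 100, 100, 2]          # e.g., image pixels -> two hidden layers -> two class scores (assumed)
weights = [rng.standard_normal((n, m)) / np.sqrt(n)
           for n, m in zip(layer_widths[:-1], layer_widths[1:])]

x = rng.standard_normal(layer_widths[0])   # a raw input, e.g., a flattened image
h = x
for W in weights:                          # pass the activity from layer to layer
    h = h @ W                              # linear units, as in the DLNNs studied here
print("network output:", h)                # the final layer's result, e.g., scores for "dog" vs "not dog"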

Our theory introduces a new analytical method, the backpropagating kernel renormalization, which provides an exact and comprehensive account of the statistical properties of the “weight space” of trained deep linear neural networks. This allows us to evaluate the factors that determine the network’s ability to generalize well despite overparametrization, including the network architecture, the size of the training set, weight regularization, and learning stochasticity.
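
The abstract above frames the analysis in terms of an equilibrium Gibbs distribution over the weight space. As a generic illustration of how such a distribution can arise from stochastic, weight-regularized learning (a standard construction, sketched here with assumed parameters rather than the paper's exact setup), noisy gradient descent on an L2-regularized loss, i.e., Langevin dynamics, samples a Gibbs distribution over the weights at long times: the noise temperature plays the role of learning stochasticity and the L2 penalty that of weight regularization.

import numpy as np

rng = np.random.default_rng(2)
P, N0 = 20, 10                       # training examples and input dimension (assumed values)
X = rng.standard_normal((P, N0))
y = rng.standard_normal(P)

w = np.zeros(N0)                     # a single linear readout, kept simple for illustration
T, lam, dt = 0.01, 0.1, 1e-3         # temperature, L2 regularization, step size (assumed values)

for _ in range(20000):
    grad = X.T @ (X @ w - y) + lam * w                               # gradient of 0.5*||Xw - y||^2 + 0.5*lam*||w||^2
    w += -dt * grad + np.sqrt(2 * T * dt) * rng.standard_normal(N0)  # Langevin (noisy gradient) step

# After equilibration, samples of w follow the Gibbs distribution
# P(w) ~ exp(-(0.5*||Xw - y||^2 + 0.5*lam*||w||^2) / T).
print(w)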

Our theory does not assume specific statistics of the training data, making it applicable to diverse realistic problems. We propose a heuristic extension from linear to nonlinear deep neural networks that accurately predicts the behavior of certain nonlinear networks of modest depth, pointing to further applications and extensions of our theory.


Issue

Vol. 11, Iss. 3 — July - September 2021



Reuse & Permissions

It is not necessary to obtain permission to reuse this article or its components as it is available under the terms of the Creative Commons Attribution 4.0 International license. This license permits unrestricted use, distribution, and reproduction in any medium, provided attribution to the author(s) and the published article's title, journal citation, and DOI are maintained. Please note that some figures may have been included with permission from other third parties. It is your responsibility to obtain the proper permission from the rights holder directly for these figures.
