Nonlinear Power Method for Computing Eigenvectors of Proximal Operators and Neural Networks
SIAM Journal on Imaging Sciences (IF 2.1) Pub Date: 2021-08-04, DOI: 10.1137/20m1384154
Leon Bungert, Ester Hait-Fraenkel, Nicolas Papadakis, Guy Gilboa

SIAM Journal on Imaging Sciences, Volume 14, Issue 3, Page 1114-1148, January 2021.
Neural networks have revolutionized the field of data science, yielding remarkable solutions in a data-driven manner. For instance, in the field of mathematical imaging, they have surpassed traditional methods based on convex regularization. However, a fundamental theory supporting the practical applications is still in the early stages of development. We take a fresh look at neural networks and examine them via nonlinear eigenvalue analysis. The field of nonlinear spectral theory is still emerging and provides insights into nonlinear operators and systems. In this paper we view a neural network as a complex nonlinear operator and attempt to find its nonlinear eigenvectors. We first discuss the existence of such eigenvectors and analyze the kernel of ReLU networks. We then study a nonlinear power method for generic nonlinear operators. For proximal operators associated with absolutely one-homogeneous convex regularization functionals, we can prove convergence of the method to an eigenvector of the proximal operator. This motivates us to apply the nonlinear power method to networks that are trained to act similarly to a proximal operator. To take the nonhomogeneity of neural networks into account, we define a modified version of the power method. We perform extensive experiments for different proximal operators and on various shallow and deep neural networks designed for image denoising. Proximal eigenvectors are used for the geometric analysis of graphs, such as clustering or the computation of distance functions. For simple neural nets, we observe the influence of the training data on the eigenvectors. For state-of-the-art denoising networks, we show that eigenvectors can be interpreted as (un)stable modes of the network when contaminated with noise or other degradations.
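To make the iteration concrete, the following minimal sketch runs a plain normalized fixed-point iteration u <- T(u)/||T(u)|| on the proximal operator of the absolutely one-homogeneous functional lam*||.||_1 (soft-thresholding). This is an illustrative sketch under our own assumptions, not the paper's exact scheme: the function names, the stopping parameters, and the Rayleigh-type eigenvalue estimate mu = <T(u), u> are choices made here for demonstration, and the paper's modified update for nonhomogeneous networks is omitted.

import numpy as np

def prox_l1(u, lam):
    # Proximal operator of the absolutely one-homogeneous functional
    # lam * ||.||_1, i.e. entrywise soft-thresholding.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def nonlinear_power_method(T, u0, n_iter=500, tol=1e-10):
    # Plain normalized fixed-point iteration u <- T(u) / ||T(u)||.
    u = u0 / np.linalg.norm(u0)
    for _ in range(n_iter):
        Tu = T(u)
        nrm = np.linalg.norm(Tu)
        if nrm == 0.0:
            return u, 0.0  # u landed in the kernel of T
        u_new = Tu / nrm
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    # For a unit-norm eigenvector, T(u) = mu * u, so mu = <T(u), u>.
    mu = float(np.dot(T(u), u))
    return u, mu

rng = np.random.default_rng(0)
u0 = rng.standard_normal(64)
u, mu = nonlinear_power_method(lambda v: prox_l1(v, lam=0.05), u0)
print("eigenvalue estimate:", mu)
print("residual ||prox(u) - mu*u||:", np.linalg.norm(prox_l1(u, 0.05) - mu * u))

At a fixed point with prox(u) = mu*u and ||u|| = 1 the printed residual should be near zero. To probe a denoising network instead, one would substitute the trained model for prox_l1 in the call above; handling the network's nonhomogeneity requires the modified scheme described in the paper, which this sketch does not reproduce.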


Updated: 2021-08-05