Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning
Neural Computation (IF 2.9), Pub Date: 2019-12-01, DOI: 10.1162/neco_a_01234
Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference is given to the handcrafted basis over the perturbable gradient basis. The proofs of these results are derived under mild assumptions. Accordingly, the proven results are directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights. A special case of our results also contributes to the theoretical foundation of representation learning.
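
To make the claim concrete, the following is a minimal numerical sketch of the simpler, unperturbed gradient-basis case under squared loss: at a stationary point θ of a small tanh network, the induced model that is linear in the gradient features ∇_θ f(x; θ) cannot attain a lower training loss than the network itself, because stationarity of the original loss is exactly the normal equation of that linear fit. The network, data set, and optimizer below are hypothetical choices made for illustration only and do not reproduce the paper's more general perturbable-gradient-basis construction or its assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_and_grads(theta, X, h):
    """Return f(x_i; theta) and the per-example gradients d f(x_i; theta) / d theta."""
    n, d = X.shape
    W = theta[: h * d].reshape(h, d)
    v = theta[h * d :]
    A = np.tanh(X @ W.T)                                   # (n, h) hidden activations
    out = A @ v                                            # (n,)  model outputs
    dA = (1.0 - A ** 2) * v                                # (n, h) d out / d pre-activation
    gW = (dA[:, :, None] * X[:, None, :]).reshape(n, -1)   # (n, h*d) d out / d W
    return out, np.concatenate([gW, A], axis=1)            # gradient features, (n, h*d + h)

# Hypothetical synthetic regression task and a tiny one-hidden-layer tanh network.
n, d, h = 60, 3, 8
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))
theta = 0.5 * rng.normal(size=h * d + h)

# Plain gradient descent on the nonconvex mean squared error to reach a near-stationary point.
for _ in range(30000):
    out, G = model_and_grads(theta, X, h)
    theta -= 0.05 * (2.0 / n) * G.T @ (out - y)

out, G = model_and_grads(theta, X, h)
loss_at_theta = np.mean((out - y) ** 2)

# Induced (gradient-basis) model at theta: x -> f(x; theta) + alpha^T grad_theta f(x; theta).
# It is linear in alpha, so its global minimum is an ordinary least-squares fit to the residual.
alpha, *_ = np.linalg.lstsq(G, y - out, rcond=None)
induced_min = np.mean((out + G @ alpha - y) ** 2)

print(f"loss at (near) local minimum    : {loss_at_theta:.6f}")
print(f"global minimum of induced model : {induced_min:.6f}")
# At an exact stationary point the two values coincide: the stationarity condition
# G.T @ (out - y) = 0 is precisely the normal equation stating that alpha = 0 is
# globally optimal for the convex induced problem; in practice the gap is only as
# small as the optimizer's residual gradient.
```
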

Updated: 2019-12-01