A priori guarantees of finite-time convergence for Deep Neural Networks
arXiv - CS - Systems and Control Pub Date : 2020-09-16 , DOI: arxiv-2009.07509
Anushree Rankawat, Mansi Rankawat, Harshal B. Oza

In this paper, we perform a Lyapunov-based analysis of the loss function to derive an a priori upper bound on the settling time of deep neural networks. While previous studies have attempted to understand deep learning through a control-theoretic framework, there is limited work on a priori finite-time convergence analysis. Drawing on advances in the analysis of finite-time control of non-linear systems, we provide a priori guarantees of finite-time convergence in a deterministic control-theoretic setting. We formulate the supervised learning framework as a control problem in which the weights of the network are the control inputs and learning translates into a tracking problem. An analytical formula for a finite-time upper bound on the settling time is computed a priori under the assumption of bounded inputs. Finally, we prove the robustness and sensitivity of the loss function against input perturbations.
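The kind of a priori settling-time bound described above typically follows the standard finite-time Lyapunov argument for non-linear systems (the form below is the classical result; the paper's specific constants and loss-function construction are not reproduced here):

```latex
% If the loss V(t) \ge 0 satisfies, along the learning dynamics,
%   \dot{V}(t) \le -c\, V(t)^{\alpha}, \quad c > 0,\; 0 < \alpha < 1,
% then V reaches zero in finite time, with the a priori bound
\[
    T_{\mathrm{settle}} \;\le\; \frac{V(0)^{\,1-\alpha}}{c\,(1-\alpha)},
\]
% obtained by separating variables and integrating
% \int_{V(0)}^{0} V^{-\alpha}\, dV \le -c \int_{0}^{T} dt.
```

Note that the bound depends only on the initial loss $V(0)$ and the constants $c$, $\alpha$, which is what makes it computable a priori, before training begins.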

Updated: 2020-09-17