Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
Neural Computation (IF 2.9), Pub Date: 2021-01-29, DOI: 10.1162/neco_a_01360
Nicholas M. Boffi, Jean-Jacques E. Slotine

Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. The local geometry imposed during learning may thus be used to select, out of the many parameter vectors that achieve perfect tracking or prediction, those with desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian to define adaptation laws with momentum, applicable to linearly parameterized systems and to nonlinearly parameterized systems satisfying monotonicity or convexity requirements. We show that the Euler-Lagrange equations for the Bregman Lagrangian lead to natural gradient and mirror-descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite-friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
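As a rough orientation for readers of the abstract, the LaTeX sketch below contrasts a classical Euclidean adaptation law with a mirror-descent-like law of the kind the abstract describes, and records the Bregman Lagrangian of Wibisono, Wilson, and Jordan (2016) on which momentum methods of this type are built. The notation (parameter estimate \hat{a}, regressor Y(x,t), sliding variable s, strictly convex potential \psi, constant gain matrix \Gamma) follows common adaptive-control convention and is assumed here, not quoted from the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Classical (Euclidean) adaptation law: a constant positive-definite
% gain matrix \Gamma drives the parameter estimate \hat{a}.
\[
  \dot{\hat{a}} = -\Gamma\, Y(x,t)^{\mathsf{T}} s
\]

% Mirror-descent-like adaptation law: the update is Euclidean in the
% dual variable \nabla\psi(\hat{a}), so the local geometry induced by
% \psi, rather than a fixed gain, shapes the estimate's trajectory.
\[
  \frac{d}{dt}\,\nabla\psi(\hat{a}) = -\,Y(x,t)^{\mathsf{T}} s
  \qquad\Longleftrightarrow\qquad
  \dot{\hat{a}} = -\bigl(\nabla^{2}\psi(\hat{a})\bigr)^{-1} Y(x,t)^{\mathsf{T}} s
\]

% Bregman Lagrangian (Wibisono, Wilson, and Jordan, 2016): its
% Euler-Lagrange equations generate accelerated mirror-descent
% dynamics. Here D_\psi is the Bregman divergence of \psi and
% \alpha_t, \beta_t, \gamma_t are time-dependent scaling functions.
\[
  \mathcal{L}(a,\dot{a},t)
  = e^{\alpha_t+\gamma_t}\Bigl(
      D_{\psi}\bigl(a + e^{-\alpha_t}\dot{a},\, a\bigr)
      - e^{\beta_t} f(a)
    \Bigr)
\]

\end{document}

Consistent with the abstract's closing remark, taking the friction in the second-order momentum dynamics to infinity collapses them back to a first-order law of the mirror-descent form shown in the middle display.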



Last updated: 2021-01-31