Resonant Machine Learning Based on Complex Growth Transform Dynamical Systems.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2), Pub Date: 2020-05-18, DOI: 10.1109/tnnls.2020.2984267
Oindrila Chatterjee, Shantanu Chakrabartty

Traditional energy-based learning models associate a single energy metric to each configuration of variables involved in the underlying optimization process. Such models associate the lowest energy state with the optimal configuration of variables under consideration and are thus inherently dissipative. In this article, we propose an energy-efficient learning framework that exploits structural and functional similarities between a machine-learning network and a general electrical network satisfying Tellegen's theorem. In contrast to the standard energy-based models, the proposed formulation associates two energy components, namely, active and reactive energy with the network. The formulation ensures that the network's active power is dissipated only during the process of learning, whereas the reactive power is maintained to be zero at all times. As a result, in steady state, the learned parameters are stored and self-sustained by electrical resonance determined by the network's nodal inductances and capacitances. Based on this approach, this article introduces three novel concepts: 1) a learning framework where the network's active-power dissipation is used as a regularization for a learning objective function that is subjected to zero total reactive-power constraint; 2) a dynamical system based on complex-domain, continuous-time growth transforms that optimizes the learning objective function and drives the network toward electrical resonance under steady-state operation; and 3) an annealing procedure that controls the tradeoff between active-power dissipation and the speed of convergence. As a representative example, we show how the proposed framework can be used for designing resonant support vector machines (SVMs), where the support vectors correspond to an LC network with self-sustained oscillations. We also show that this resonant network dissipates less active power compared with its non-resonant counterpart.
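The continuous-time, complex-domain growth-transform dynamics in concept 2) build on the classical Baum-Eagon growth transform. As a rough illustration only (real-valued and discrete-time, with an invented example objective; the paper's actual formulation is complex-valued and continuous-time), the following sketch shows the multiplicative update that underlies growth transforms, which increases a polynomial objective monotonically while keeping the variables on a probability simplex:

```python
import numpy as np

# Hedged sketch, NOT the paper's formulation: the classical real-valued
# Baum-Eagon growth transform that complex growth transforms generalize.
# It maximizes a polynomial f with nonnegative coefficients over the
# probability simplex {x : x_i >= 0, sum_i x_i = 1} via a multiplicative,
# normalization-preserving update.

def f(x):
    # Example objective (invented for illustration): f(x) = x0*x1 + 0.5*x2^2
    return x[0] * x[1] + 0.5 * x[2] ** 2

def grad_f(x):
    return np.array([x[1], x[0], x[2]])

def growth_transform_step(x, g):
    """One update: x_i <- x_i*g_i / sum_j(x_j*g_j); stays on the simplex."""
    w = x * g
    return w / w.sum()

x = np.array([0.4, 0.3, 0.3])   # initial point on the 3-simplex
f0 = f(x)
for _ in range(200):
    x = growth_transform_step(x, grad_f(x))
# Baum-Eagon guarantees monotone ascent: f(x) never decreases, and the
# iterates converge to a (local) maximizer on the simplex.
```

Because each step only rescales and renormalizes, the simplex constraint is maintained by construction rather than by projection, which is what makes these dynamics attractive for the paper's constrained, network-conservation setting.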

Updated: 2020-05-18