On the investigation of activation functions in gradient neural network for online solving linear matrix equation
Neurocomputing (IF 5.5), Pub Date: 2020-11-01, DOI: 10.1016/j.neucom.2020.06.097
Zhiguo Tan , Yueming Hu , Ke Chen

Abstract In this paper, we investigate the effect of different activation functions (AFs) on the convergence performance of a gradient-based neural network (GNN) for solving the linear matrix equation AXB + X = C. It is observed that, by employing different AFs, i.e., the linear, power-sigmoid, sign-power, and general sign-bi-power functions, the presented GNN model achieves different convergence performance. More specifically, if the linear function is employed, the GNN model achieves exponential convergence; if the power-sigmoid function is employed, superior convergence is achieved compared with the linear case; and if the sign-power and general sign-bi-power functions are employed, the GNN model achieves finite-time and fixed-time convergence, respectively. Detailed theoretical proofs are offered to demonstrate these facts. In addition, the exponential convergence rate and the upper bounds of the finite and fixed convergence times are estimated theoretically. Finally, two illustrative examples are presented to further substantiate the aforementioned theoretical results and the effectiveness of the presented GNN model for solving the linear matrix equation.
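The model class the abstract describes can be made concrete with a small numerical sketch. The Python/NumPy snippet below is an illustration under stated assumptions, not the paper's implementation: it assumes the standard GNN design dX/dt = -gamma * (A^T Phi(E) B^T + Phi(E)) with residual E = AXB + X - C, i.e., the negative gradient of ||E||_F^2 / 2 with an activation function Phi applied element-wise, integrated by forward Euler. The activation forms, step sizes, and the `gnn_solve` helper are all hypothetical choices for illustration.

```python
import numpy as np

def linear_af(e):
    # Linear activation: Phi(e) = e (the exponential-convergence case).
    return e

def sign_power_af(e, p=0.5):
    # Assumed sign-power form: Phi(e) = sign(e) * |e|^p with 0 < p < 1,
    # which yields finite-time convergence in the continuous-time analysis.
    return np.sign(e) * np.abs(e) ** p

def gnn_solve(A, B, C, af=linear_af, gamma=1.0, dt=5e-3, steps=40000):
    """Forward-Euler simulation of dX/dt = -gamma*(A.T @ af(E) @ B.T + af(E)),
    where E = A X B + X - C, starting from X(0) = 0."""
    X = np.zeros_like(C)
    for _ in range(steps):
        E = A @ X @ B + X - C                      # residual of AXB + X = C
        X -= gamma * dt * (A.T @ af(E) @ B.T + af(E))
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    X_true = rng.standard_normal((3, 3))
    C = A @ X_true @ B + X_true                    # construct a solvable instance
    X = gnn_solve(A, B, C, af=linear_af)
    print("residual norm:", np.linalg.norm(A @ X @ B + X - C))
```

With the linear AF the residual decays exponentially, matching the abstract's first claim; swapping in `sign_power_af` mimics the finite-time case of the continuous-time model, although the fixed-step Euler discretization may chatter near the solution rather than reach it exactly.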
