The q-Gauss-Newton method for unconstrained nonlinear optimization
arXiv - CS - Computational Complexity. Pub Date: 2021-05-27, DOI: arxiv-2105.12994
Danijela Protic, Miomir Stankovic

The q-Gauss-Newton algorithm is an iterative procedure that solves nonlinear unconstrained optimization problems by minimizing the sum of squared errors of the objective function residuals. The main advantage of the algorithm is that it approximates the matrix of q-second-order derivatives with the first-order q-Jacobian matrix. For that reason, the algorithm is much faster than q-steepest descent algorithms. The convergence of the q-GN method is assured only when the initial guess is close enough to the solution. In this paper, the influence of the parameter q on nonlinear problem solving is presented through three examples. The results show that the q-GN algorithm finds an optimal solution and speeds up the iterative procedure.
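To make the idea concrete, the following is a minimal sketch of a Gauss-Newton iteration in which the ordinary Jacobian of the residuals is replaced by a Jackson-type q-derivative approximation. This is an illustrative assumption, not the authors' implementation: the helper names (q_jacobian, q_gauss_newton), the finite-difference fallback near zero, and the choice q = 0.95 are all made up for the example.

```python
import numpy as np

def q_jacobian(residual, x, q=0.95):
    """Approximate the Jacobian of `residual` at x with Jackson q-derivatives.

    D_q f(x_j) = (f(q*x_j) - f(x_j)) / ((q - 1) * x_j) for x_j != 0;
    falls back to an ordinary finite difference when x_j is near zero.
    (Assumed form for illustration; not taken from the paper.)
    """
    r0 = residual(x)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        xq = x.copy()
        if abs(x[j]) > 1e-12:
            xq[j] = q * x[j]
            J[:, j] = (residual(xq) - r0) / ((q - 1.0) * x[j])
        else:
            h = 1e-8
            xq[j] = x[j] + h
            J[:, j] = (residual(xq) - r0) / h
    return J

def q_gauss_newton(residual, x0, q=0.95, tol=1e-8, max_iter=100):
    """Gauss-Newton iteration using the q-Jacobian in place of the ordinary Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = q_jacobian(residual, x, q)
        # Gauss-Newton step: minimize ||J dx + r||, i.e. solve (J^T J) dx = -J^T r
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: fit y = a * exp(b * t) to synthetic data by minimizing squared residuals
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
print(q_gauss_newton(res, x0=[1.0, 1.0]))  # should approach [2.0, 1.5]
```

As in the standard Gauss-Newton method, the second-order term of the Hessian is dropped, so each step needs only first-order (here q-derivative) information about the residuals.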

Updated: 2021-05-28