Cluster Gauss–Newton method
Optimization and Engineering (IF 2.0), Pub Date: 2020-10-28, DOI: 10.1007/s11081-020-09571-2
Yasunori Aoki, Ken Hayami, Kota Toshimoto, Yuichi Sugiyama

Parameter estimation problems for mathematical models can often be formulated as nonlinear least squares problems. Typically these problems are solved numerically using iterative methods. The local minimiser obtained by such iterative methods usually depends on the choice of the initial iterate; hence the estimated parameter, and any subsequent analyses that use it, depend on that choice as well. One way to reduce this bias is to repeat the algorithm from multiple initial iterates (i.e. to use a multi-start method). However, this procedure can be computationally intensive and is not always used in practice. To overcome this problem, we propose the Cluster Gauss–Newton (CGN) method, an efficient algorithm for finding multiple approximate minimisers of nonlinear least squares problems. CGN solves the nonlinear least squares problem from multiple initial iterates simultaneously, iteratively improving the approximations from these iterates in a manner similar to the Gauss–Newton method. However, it uses a global linear approximation instead of the Jacobian. The global linear approximations are computed collectively over all the iterates to minimise the computational cost associated with evaluating the mathematical model. We demonstrate the method on physiologically based pharmacokinetic (PBPK) models from pharmaceutical drug development, and show that CGN is computationally more efficient and more robust against local minima than the standard Levenberg–Marquardt method, as well as state-of-the-art multi-start and derivative-free methods.
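The core idea of the abstract — a cluster of iterates sharing one linear surrogate of the residual, fitted jointly by linear regression and used in place of per-point Jacobians — can be sketched on a toy curve-fitting problem. This is an illustrative reconstruction under simplifying assumptions (the box clipping, regularisation constant, and stopping rule are ad hoc choices, not the paper's algorithm); all names are made up for the example.

```python
import numpy as np

# Toy nonlinear least squares problem: recover (a, b) in y(t) = a * exp(-b * t).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 10)
x_true = np.array([2.0, 1.5])
y_obs = x_true[0] * np.exp(-x_true[1] * t)

def residual(x):
    """Residual vector r(x) = model(x) - data."""
    return x[0] * np.exp(-x[1] * t) - y_obs

# Cluster of initial iterates sampled from a plausible parameter box.
N = 20
lo, hi = np.array([0.5, 0.5]), np.array([3.0, 3.0])
X = rng.uniform(lo, hi, size=(N, 2))

best0 = min(np.linalg.norm(residual(x)) for x in X)  # best initial residual
best_so_far = best0
lam = 1e-2  # regularisation, in the spirit of Levenberg-Marquardt (ad hoc)

for it in range(30):
    R = np.array([residual(x) for x in X])           # (N, m) residuals
    # Global linear approximation r(x) ~ A x + c, fitted jointly over the
    # whole cluster by linear least squares -- one model evaluation per
    # iterate, no finite-difference Jacobians.
    X1 = np.hstack([X, np.ones((N, 1))])             # append intercept column
    coef, *_ = np.linalg.lstsq(X1, R, rcond=None)    # (3, m) coefficients
    A = coef[:2].T                                   # (m, 2) Jacobian surrogate
    # Regularised Gauss-Newton-style update, shared A for every member.
    H = A.T @ A + lam * np.eye(2)
    for i in range(N):
        step = np.linalg.solve(H, A.T @ R[i])
        # Clip to a bounding box for numerical safety (illustrative choice).
        X[i] = np.clip(X[i] - step, [0.2, 0.2], [4.0, 4.0])
    best_so_far = min(best_so_far,
                      min(np.linalg.norm(residual(x)) for x in X))

print(best_so_far)
```

Because every cluster member reuses the same fitted surrogate, the cost per iteration is one model evaluation per iterate, which is what makes the collective approximation cheaper than running a derivative-based solver from each start independently.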




Updated: 2020-10-30