Inexact basic tensor methods for some classes of convex optimization problems
Optimization Methods & Software ( IF 1.4 ) Pub Date : 2020-12-10
Yurii Nesterov

ABSTRACT

In this paper, we analyse the Basic Tensor Methods, which use approximate solutions of the auxiliary problems. The quality of this solution is described by the residual in the function value, which must be proportional to ϵ^{(p+1)/p}, where p ≥ 1 is the order of the method and ϵ is the desired accuracy for the main optimization problem. We analyse in detail the auxiliary schemes for the third- and second-order tensor methods. The auxiliary problems for the third-order scheme can be solved very efficiently by a linearly convergent gradient-type method with a preconditioner. The most expensive operation in this process is a preliminary factorization of the Hessian of the objective function. For solving the auxiliary problem for the second-order scheme, we suggest two variants of the Fast Gradient Method with restart, which converge as O(1/k^6), where k is the iteration counter. Finally, we present the results of preliminary computational experiments.
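For intuition only, the following is a minimal Python sketch of a fast gradient method with a function-value restart, the generic mechanism mentioned in the abstract, not the paper's specific variants. To keep the stopping test simple it assumes the auxiliary objective is L-smooth and mu-strongly convex (the actual auxiliary objectives in tensor methods are uniformly convex, which is what yields the O(1/k^6) rate); the names f, grad_f, L, mu, tol are hypothetical placeholders.

import numpy as np


def fgm_with_restart(f, grad_f, L, mu, x0, tol, max_iters=10_000):
    """Illustrative fast gradient method with a function-value restart.

    Assumes f is L-smooth and mu-strongly convex, so that
    f(x) - f_min <= ||grad_f(x)||^2 / (2 * mu) gives a computable bound
    on the residual in the function value.  The target tolerance tol
    plays the role of c * eps ** ((p + 1) / p) for a method of order p.
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    f_prev = f(x)
    for _ in range(max_iters):
        g = grad_f(y)
        x_new = y - g / L                         # gradient step with step size 1/L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)

        f_new = f(x_new)
        if f_new > f_prev:                        # restart momentum when the value increases
            y = x_new.copy()
            t_new = 1.0
        x, t, f_prev = x_new, t_new, f_new

        # stop once the certified residual bound meets the target accuracy
        if np.linalg.norm(grad_f(x)) ** 2 / (2.0 * mu) <= tol:
            break
    return x

The sketch only shows the restart mechanics and the role of the function-value residual as a stopping criterion; the paper's two restarted variants for the second-order auxiliary problem are tailored to its uniform convexity.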




Updated: 2020-12-10