The Boosted Difference of Convex Functions Algorithm for Nonsmooth Functions
SIAM Journal on Optimization (IF 2.6). Pub Date: 2020-03-23. DOI: 10.1137/18m123339x
Francisco J. Aragón Artacho, Phan T. Vuong

SIAM Journal on Optimization, Volume 30, Issue 1, Pages 980-1006, January 2020.
The boosted difference of convex functions algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. First, we show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Second, we show that there is complete freedom in the choice of the trial step size for the line search, which can further improve the algorithm's performance. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergence rate of the iterations are obtained under the Kurdyka–Łojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the minimum sum-of-squares clustering problem, BDCA was on average 16 times faster than DCA, and for the multidimensional scaling problem, it was 3 times faster.
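As a rough illustration of the scheme the abstract describes, here is a minimal Python sketch of a BDCA-style iteration on a toy DC function of the smooth-minus-nonsmooth form the paper considers. Everything concrete in the sketch is an assumption chosen for the example, not taken from the paper: the smooth part g(x) = ½xᵀAx + bᵀx, the nonsmooth part h(x) = ‖x‖₁, the Armijo-type backtracking test, and all parameter values.

```python
import numpy as np

def bdca(A, b, x0, alpha=0.1, beta=0.5, lam_bar=2.0, tol=1e-8, max_iter=500):
    """Sketch of BDCA for phi(x) = g(x) - h(x), with the illustrative choices
    g(x) = 0.5 x'Ax + b'x (smooth, A symmetric positive definite) and
    h(x) = ||x||_1 (convex, nonsmooth). Parameter values are placeholders."""
    phi = lambda x: 0.5 * x @ (A @ x) + b @ x - np.abs(x).sum()
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        u = np.sign(x)                    # a subgradient of h at x (0 at zero components)
        y = np.linalg.solve(A, u - b)     # DCA step: argmin_z g(z) - <u, z>
        d = y - x                         # line search direction from the DCA iterate
        if np.linalg.norm(d) < tol:       # y == x means x is already a critical point
            break
        lam = lam_bar                     # trial step size; its choice is free, per the paper
        # Armijo-type backtracking from y along d (assumed sufficient-decrease test)
        while lam > 1e-12 and phi(y + lam * d) > phi(y) - alpha * lam**2 * (d @ d):
            lam *= beta
        x = y + lam * d                   # boosted step; lam -> 0 recovers the DCA iterate y
    return x

# Toy usage: a small strongly convex quadratic minus the l1 norm.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([0.3, -0.2])
x_star = bdca(A, b, x0=np.array([1.0, 1.0]))
```

Note the design of the extra step: when backtracking drives the step size to zero, the update collapses to the plain DCA iterate y, so in this sketch each BDCA iteration achieves an objective value no worse than the corresponding DCA iteration.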

