-
Gradient methods with memory Optim. Methods Softw. (IF 1.431) Pub Date : 2021-01-13 Yurii Nesterov; Mihai I. Florea
ABSTRACT In this paper, we consider gradient methods for minimizing smooth convex functions, which employ the information obtained at the previous iterations in order to accelerate the convergence towards the optimal solution. This information is used in the form of a piece-wise linear model of the objective function, which provides us with much better prediction abilities as compared with the standard
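Editor's note: the piecewise-linear model referred to here is, in its generic bundle-style form, the maximum of linearizations collected at past iterates; a minimal sketch of such a model (generic notation, not necessarily the paper's exact construction):

```latex
% Generic piecewise-linear lower model of a convex f built from past iterates x_0,...,x_k
\ell_k(x) \;=\; \max_{0 \le i \le k} \bigl\{ f(x_i) + \langle \nabla f(x_i),\, x - x_i \rangle \bigr\} \;\le\; f(x).
```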
-
Sparktope: linear programs from algorithms Optim. Methods Softw. (IF 1.431) Pub Date : 2021-01-06 David Avis; David Bremner
In a recent paper, Avis, Bremner, Tiwary and Watanabe gave a method for constructing linear programs (LPs) based on algorithms written in a simple programming language called Sparks. If an algorithm produces the solution x to a problem in polynomial time and space then the LP constructed is also of polynomial size and its optimum solution contains x as well as a complete execution trace of the algorithm
-
Efficient numerical methods to solve sparse linear equations with application to PageRank Optim. Methods Softw. (IF 1.431) Pub Date : 2020-12-21 Anton Anikin; Alexander Gasnikov; Alexander Gornov; Dmitry Kamzolov; Yury Maximov; Yurii Nesterov
Over the last two decades, the PageRank problem has received increased interest from the academic community as an efficient tool to estimate web-page importance in information retrieval. Despite numerous developments, the design of efficient optimization algorithms for the PageRank problem is still a challenge. This paper proposes three new algorithms with a linear time complexity for solving the problem
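Editor's note: the baseline against which such algorithms are usually compared is the classical power iteration on the damped link matrix; a minimal sketch (dense NumPy, damping factor 0.85 assumed; this is the textbook method, not one of the three algorithms proposed in the paper):

```python
import numpy as np

def pagerank_power(A, d=0.85, tol=1e-8, max_iter=1000):
    """Classical power iteration for PageRank.

    A : (n, n) column-stochastic link matrix (A[i, j] = prob. of moving j -> i).
    d : damping factor.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                  # uniform starting distribution
    teleport = np.full(n, (1.0 - d) / n)     # random-jump component
    for _ in range(max_iter):
        x_new = d * (A @ x) + teleport
        if np.linalg.norm(x_new - x, 1) < tol:   # l1 error, since x is a distribution
            return x_new
        x = x_new
    return x
```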
-
On the Barzilai–Borwein gradient methods with structured secant equation for nonlinear least squares problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-12-16 Aliyu Muhammed Awwal; Poom Kumam; Lin Wang; Mahmoud Muhammad Yahaya; Hassan Mohammad
ABSTRACT We propose structured spectral gradient algorithms for solving nonlinear least squares problems based on a modified structured secant equation. The idea is to integrate more details of the Hessian of the objective function into the standard spectral parameters with the goal of improving numerical efficiency. We safeguard the structured spectral parameters to avoid negative curvature search
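Editor's note: the classical (unstructured) Barzilai–Borwein step sizes that such structured variants build on are easy to state; a sketch of the two standard choices (the paper's structured secant equation replaces the vector y with a quantity carrying least-squares Hessian structure):

```python
import numpy as np

def bb_stepsizes(s, y):
    """Classical Barzilai-Borwein step sizes from the secant pair (s, y).

    s = x_{k+1} - x_k,  y = g_{k+1} - g_k.
    The structured variants in the paper modify y using second-order
    information specific to nonlinear least squares; this is the plain version.
    """
    sy = s @ y
    alpha_bb1 = (s @ s) / sy   # long step: argmin ||alpha^{-1} s - y||
    alpha_bb2 = sy / (y @ y)   # short step: argmin ||s - alpha y||
    return alpha_bb1, alpha_bb2
```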
-
Inexact basic tensor methods for some classes of convex optimization problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-12-10 Yurii Nesterov
ABSTRACT In this paper, we analyse the Basic Tensor Methods, which use approximate solutions of the auxiliary problems. The quality of this solution is described by the residual in the function value, which must be proportional to ϵ^{(p+1)/p}, where p ≥ 1 is the order of the method and ϵ is the desired accuracy in the main optimization problem. We analyse in detail the auxiliary schemes for the third-
-
Robust piecewise linear L1-regression via nonsmooth DC optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2020-12-08 Adil M. Bagirov; Sona Taheri; Napsu Karmitsa; Nargiz Sultanova; Soodabeh Asadi
ABSTRACT The piecewise linear L1-regression problem is formulated as an unconstrained difference of convex (DC) optimization problem and an algorithm for solving this problem is developed. Auxiliary problems are introduced to design an adaptive approach to generate a suitable piecewise linear regression model and starting points for solving the underlying DC optimization problems. The performance of
-
Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case Optim. Methods Softw. (IF 1.431) Pub Date : 2020-12-02 Francisco Facchinei; Vyacheslav Kungurtsev; Lorenzo Lampariello; Gesualdo Scutari
ABSTRACT In this paper, we first extend the diminishing stepsize method for nonconvex constrained problems presented in F. Facchinei, V. Kungurtsev, L. Lampariello and G. Scutari [Ghost penalties in nonconvex constrained optimization: Diminishing stepsizes and iteration complexity, to appear in Math. Oper. Res., 2020. Available at https://arxiv.org/abs/1709.03384] to deal with equality constraints
-
A fully stochastic second-order trust region method Optim. Methods Softw. (IF 1.431) Pub Date : 2020-11-25 Frank E. Curtis; Rui Shi
ABSTRACT A stochastic second-order trust region method is proposed, which can be viewed as an extension of the trust-region-ish (TRish) algorithm proposed by Curtis et al. [A stochastic trust region algorithm based on careful step normalization. INFORMS J. Optim. 1(3) 200–220, 2019]. In each iteration, a search direction is computed by (approximately) solving a subproblem defined by stochastic gradient
-
A class of smooth exact penalty function methods for optimization problems with orthogonality constraints Optim. Methods Softw. (IF 1.431) Pub Date : 2020-11-24 Nachuan Xiao; Xin Liu; Ya-xiang Yuan
ABSTRACT Updating the augmented Lagrangian multiplier by a closed-form expression yields an efficient first-order infeasible approach for optimization problems with orthogonality constraints. Hence, parallelization becomes tractable in solving this type of problem. Inspired by this closed-form updating scheme, we propose a novel penalty function with compact convex constraints (PenC). We show that PenC
-
Global convergence of a new sufficient descent spectral three-term conjugate gradient class for large-scale optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2020-11-12 M. R. Eslahchi; S. Bojari
ABSTRACT To solve a large-scale unconstrained optimization problem, in this paper we propose a class of spectral three-term conjugate gradient methods. We show that the proposed class in fact generates sufficient descent directions that also fulfil the Dai–Liao conjugacy condition. We prove the global convergence of the presented class for either uniformly convex or general smooth functions under
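Editor's note: a generic three-term conjugate gradient direction, of which the proposed class is a spectral instance, takes the following form; the specific scalars β_k, θ_k and the spectral scaling are defined in the paper and not reproduced here:

```latex
% Generic three-term conjugate gradient template
d_{k+1} = -\,g_{k+1} + \beta_k d_k + \theta_k y_k, \qquad y_k = g_{k+1} - g_k,
% sufficient descent means, for some c > 0,
g_{k+1}^{\top} d_{k+1} \;\le\; -\,c\,\|g_{k+1}\|^2 .
```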
-
A new interior-point approach for large separable convex quadratic two-stage stochastic problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-11-03 Jordi Castro; Paula de la Lama-Zubirán
ABSTRACT Two-stage stochastic models give rise to very large optimization problems. Several approaches have been devised for efficiently solving them, including interior-point methods (IPMs). However, using IPMs, the linking columns associated with first-stage decisions cause excessive fill-in for the solution of the normal equations. This downside is usually alleviated if variable splitting is applied
-
Minimum point-overlap labelling* Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-30 Yuya Higashikawa; Keiko Imai; Takeharu Shiraga; Noriyoshi Sukegawa; Yusuke Yokosuka
In an application of map labelling to air-traffic control, labels should be placed with as few overlaps as possible since labels include important information about airplanes. Motivated by this application, de Berg and Gerrits (Comput. Geom. 2012) proposed a problem of maximizing the number of free labels (i.e. labels not intersecting with any other label) and developed approximation algorithms for
-
Γ-robust linear complementarity problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-14 Vanessa Krebs; Martin Schmidt
Complementarity problems are often used to compute equilibria made up of specifically coordinated solutions of different optimization problems. Specific examples are game-theoretic settings like the bimatrix game or energy market models such as those for electricity or natural gas. While optimization under uncertainty is rather well developed, the field of equilibrium models represented by complementarity
-
Solving quadratic multi-leader-follower games by smoothing the follower's best response Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-12 Michael Herty; Sonja Steffensen; Anna Thünen
ABSTRACT We analyse the existence of Nash equilibria for a class of quadratic multi-leader-follower games using the nonsmooth best response function. To overcome the challenge of nonsmoothness, we pursue a smoothing approach resulting in a reformulation as a smooth Nash equilibrium problem. The existence and uniqueness of solutions are proven for all smoothing parameters. Accumulation points of Nash
-
Preference robust models in multivariate utility-based shortfall risk minimization Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-02 Yuan Zhang; Huifu Xu; Wei Wang
Utility-based shortfall risk measure (SR) has received increasing attention over the past few years. Recently, Delage et al. [Shortfall Risk Models When Information of Loss Function Is Incomplete, GERAD HEC, Montréal, 2018] consider a situation where a decision maker's true loss function in the definition of SR is unknown but it is possible to elicit a set of plausible utility functions with partial
-
Simultaneous iterative solutions for the trust-region and minimum eigenvalue subproblem Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-01 I. G. Akrotirianakis; M. Gratton; J. D. Griffin; S. Yektamaram; W. Zhou
ABSTRACT Given the inability to foresee all possible scenarios, it is justified to desire an efficient trust-region subproblem solver capable of delivering any desired level of accuracy on demand; that is, the accuracy obtainable for a given trust-region subproblem should not be partially dependent on the problem itself. Current state-of-the-art iterative eigensolvers all fall into the class of restarted
-
On the abs-polynomial expansion of piecewise smooth functions Optim. Methods Softw. (IF 1.431) Pub Date : 2020-10-01 A. Griewank; T. Streubel; C. Tischendorf
Tom Streubel has observed that for functions in abs-normal form, generalized Taylor expansions of arbitrary order d̄ − 1 can be generated by algorithmic piecewise differentiation. Abs-normal form means that the real or vector valued function is defined by an evaluation procedure that involves the absolute value function |·| apart from arithmetic operations and d̄-times continuously differentiable
-
Exploiting aggregate sparsity in second-order cone relaxations for quadratic constrained quadratic programming problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-29 Heejune Sheen; Makoto Yamashita
Among many approaches to increase the computational efficiency of semidefinite programming (SDP) relaxation for nonconvex quadratic constrained quadratic programming problems (QCQPs), exploiting the aggregate sparsity of the data matrices in the SDP by Fukuda et al. [Exploiting sparsity in semidefinite programming via matrix completion I: General framework, SIAM J. Optim. 11(3) (2001), pp. 647–674]
-
An ADMM-based interior-point method for large-scale linear programming Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-24 Tianyi Lin; Shiqian Ma; Yinyu Ye; Shuzhong Zhang
In this paper, we propose a new framework to implement the interior point method (IPM) in order to solve some very large-scale linear programs (LPs). Traditional IPMs typically use Newton's method to approximately solve a subproblem that aims to minimize a log-barrier penalty function at each iteration. Due to its connection to Newton's method, IPM is often classified as a second-order method – a genre that
-
Tensor methods for finding approximate stationary points of convex functions Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-23 G. N. Grapiglia; Yurii Nesterov
In this paper, we consider the problem of finding ε-approximate stationary points of convex functions that are p-times differentiable with ν-Hölder continuous pth derivatives. We present tensor methods with and without acceleration. Specifically, we show that the non-accelerated schemes take at most O(ϵ^{−1/(p+ν−1)}) iterations to reduce the norm of the gradient of the objective below a given ϵ
-
Newton projection method as applied to assembly simulation Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-18 S. Baklanov; M. Stefanova; S. Lupuleac
In this paper, we consider the Newton projection method for solving the quadratic programming problem that emerges in simulation of the joining process for assembly with compliant parts. This particular class of problems has specific features such as an ill-conditioned Hessian and a sparse matrix of constraints, as well as the need for large-scale computation. We use the projected Newton method with
-
A DC approach for minimax fractional optimization programs with ratios of convex functions Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-16 A. Ghazi; A. Roubi
This paper deals with minimax fractional programs whose objective functions are the maximum of finitely many ratios of convex functions, with an arbitrary convex constraint set. For such problems, Dinkelbach-type algorithms fail to work since the parametric subproblems may be nonconvex, whereas such algorithms require a global optimal solution of these subproblems. We give necessary optimality conditions for such problems
-
On basic operations related to network induction of discrete convex functions Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-15 Kazuo Murota
Discrete convex functions are used in many areas, including operations research, discrete-event systems, game theory, and economics. The objective of this paper is to investigate basic operations such as direct sum, splitting, and aggregation that are related to network induction of discrete convex functions as well as discrete convex sets. Various kinds of discrete convex functions in discrete convex
-
Improving dynamic programming for travelling salesman with precedence constraints: parallel Morin–Marsten bounding Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-14 Yaroslav. V. Salii; Andrey S. Sheka
The precedence constrained traveling salesman (TSP-PC), also known as sequential ordering problem (SOP), consists of finding an optimal tour that satisfies the namesake constraints. Mixed integer-linear programming works well with the 'lightly constrained' TSP-PCs, close to asymmetric TSP, as well as with the 'heavily constrained' ones (Gouveia, Ruthmair, 2015). Dynamic programming (DP) works well with
-
Inexact SARAH algorithm for stochastic optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-14 Lam M. Nguyen; Katya Scheinberg; Martin Takáč
We develop and analyse a variant of the SARAH algorithm, which does not require computation of the exact gradient. Thus this new method can be applied to general expectation minimization problems rather than only finite sum problems. While the original SARAH algorithm, as well as its predecessor, SVRG, requires an exact gradient computation on each outer iteration, the inexact variant of SARAH (iSARAH)
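Editor's note: the SARAH recursive gradient estimator that the inexact variant retains is simple to write down; a minimal inner-loop sketch in which the exact full gradient of the original method is replaced by a large-batch estimate (function names and the sampling scheme are illustrative, not the paper's exact pseudocode):

```python
import numpy as np

def isarah_epoch(w, grad_i, sample, eta, m, batch_outer):
    """One outer iteration of an iSARAH-style method (illustrative sketch).

    grad_i(w, i) : stochastic gradient of sample i at w.
    sample(k)    : draws k random sample indices.
    eta, m       : step size and inner-loop length.
    batch_outer  : batch size replacing the exact full gradient of SARAH.
    """
    # Inexact outer gradient: large mini-batch instead of the full gradient.
    idx = sample(batch_outer)
    v = np.mean([grad_i(w, i) for i in idx], axis=0)
    w_prev = w
    w = w - eta * v
    for _ in range(m):
        i = sample(1)[0]
        # SARAH recursion: biased but variance-reduced estimator.
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev = w
        w = w - eta * v
    return w
```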
-
Cooperative differential games with continuous updating using Hamilton–Jacobi–Bellman equation Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-10 Ovanes Petrosian; Anna Tur; Zeyang Wang; Hongwei Gao
This paper examines a class of cooperative differential games with continuous updating. Here it is assumed that at each time instant players have or use information about the game structure defined for a closed time interval with fixed duration. The current time continuously evolves with the updating interval. The main problems considered in a cooperative setting with continuous updating are how to
-
Gravity-magnetic cross-gradient joint inversion by the cyclic gradient method Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-13 Cong Sun; Yanfei Wang
In this paper, we consider a joint-inversion problem using different types of geophysical data: gravity and magnetism. We first formulate two kinds of inverse problems in the framework of Fredholm integral equations of the first kind, and then build up a sparse inversion model combining the two inverse problems as well as the cross-gradient term. The cyclic gradient method for quadratic function minimization
-
A stochastic dual dynamic programming method for two-stage distributionally robust optimization problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-09-03 Xiaojiao Tong; Liu Yang; Xiao Luo; Bo Rao
This paper studies a class of two-stage distributionally robust optimization (TDRO) problems which come from many practical application fields. In order to set up an implementable solution method, we first transform the TDRO problem into its equivalent robust counterpart (RC) by the duality theorem of optimization. The RC reformulation of TDRO is a semi-infinite stochastic program. Then we construct
-
COCO: a platform for comparing continuous optimizers in a black-box setting Optim. Methods Softw. (IF 1.431) Pub Date : 2020-08-25 Nikolaus Hansen; Anne Auger; Raymond Ros; Olaf Mersmann; Tea Tušar; Dimo Brockhoff
We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology make it possible to benchmark, within the same framework, deterministic and stochastic solvers for both single and multiobjective
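Editor's note: a typical benchmarking loop with COCO's Python module looks roughly as follows (sketch based on the documented cocoex interface; the result-folder name and the choice of scipy's Nelder–Mead as the benchmarked solver are illustrative):

```python
import cocoex            # COCO experimentation module
import scipy.optimize

suite = cocoex.Suite("bbob", "", "")    # the single-objective BBOB test suite
observer = cocoex.Observer("bbob", "result_folder: my-solver")  # data logging
for problem in suite:
    problem.observe_with(observer)      # record this run for post-processing
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
```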
-
A partitioned scheme for adjoint shape sensitivity analysis of fluid–structure interactions involving non-matching meshes Optim. Methods Softw. (IF 1.431) Pub Date : 2020-08-17 Reza Najian Asl; Ihar Antonau; Aditya Ghantasala; Wulf G. Dettmer; Roland Wüchner; Kai-Uwe Bletzinger
This work presents a partitioned solution procedure to compute shape gradients in fluid–structure interaction (FSI) using black-box adjoint solvers. Special attention is paid to project the gradients onto the undeformed configuration due to the mixed Lagrangian–Eulerian formulation of large-deformation FSI in this work. The adjoint FSI problem is partitioned as an assembly of well-known adjoint fluid
-
A primal-dual interior point trust-region method for nonlinear semidefinite programming Optim. Methods Softw. (IF 1.431) Pub Date : 2020-08-17 Hiroshi Yamashita; Hiroshi Yabe; Kouhei Harada
In this paper, we propose a primal-dual interior point trust-region method for solving nonlinear semidefinite programming problems. The method consists of the outer iteration (SDPIP) that finds a Karush–Kuhn–Tucker (KKT) point and the inner iteration (SDPTR) that calculates an approximate barrier KKT point. Algorithm SDPTR combines a commutative class of Newton-like directions with the steepest descent
-
EAGO.jl: easy advanced global optimization in Julia Optim. Methods Softw. (IF 1.431) Pub Date : 2020-08-09 M. E. Wilhelm; M. D. Stuber
An extensible open-source deterministic global optimizer (EAGO) programmed entirely in the Julia language is presented. EAGO was developed to serve the need for supporting higher-complexity user-defined functions (e.g. functions defined implicitly via algorithms) within optimization models. EAGO embeds a first-of-its-kind implementation of McCormick arithmetic in an Evaluator structure allowing for
-
Multistage stochastic programs with a random number of stages: dynamic programming equations, solution methods, and application to portfolio selection Optim. Methods Softw. (IF 1.431) Pub Date : 2020-08-10 Vincent Guigues
We introduce the class of multistage stochastic optimization problems with a random number of stages. For such problems, we show how to write dynamic programming equations and how to solve these equations using the Stochastic Dual Dynamic Programming algorithm. Finally, we consider a portfolio selection problem over an optimization period of random duration. For several instances of this problem, we
-
Projections onto the canonical simplex with additional linear inequalities Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-29 L. Adam; V. Mácha
We consider distributionally robust optimization and show that computing the distributional worst case is equivalent to computing the projection onto the canonical simplex with additional linear inequalities. We consider several distance functions to measure the distance between distributions. We write the projections as optimization problems and show that they are equivalent to finding a zero of a real-valued
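Editor's note: the base operation here, Euclidean projection onto the canonical simplex, has a well-known O(n log n) sort-based solution; a sketch (without the additional linear inequalities treated in the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Standard sort-based algorithm; the paper adds further linear
    inequalities on top of this basic projection.
    """
    u = np.sort(v)[::-1]                        # sort in decreasing order
    cssv = np.cumsum(u) - 1.0
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cssv / k > 0)[0][-1]   # last index kept positive
    theta = cssv[rho] / (rho + 1.0)             # optimal shift
    return np.maximum(v - theta, 0.0)
```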
-
A study of one-parameter regularization methods for mathematical programs with vanishing constraints Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-28 Tim Hoheisel; Blanca Pablos; Aram Pooladian; Alexandra Schwartz; Luke Steverango
Mathematical programs with vanishing constraints (MPVCs) are a class of nonlinear optimization problems with applications to various engineering problems such as truss topology design and robot motion planning. MPVCs are difficult problems from both a theoretical and numerical perspective: the combinatorial nature of the vanishing constraints often prevents standard constraint qualifications and optimality
-
Generalized derivatives of computer programs Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-26 Matthew R. Billingsley; Paul I. Barton
A method for evaluating lexicographical directional (LD)-derivatives of functional programs is presented, extending previous methods to programs containing conditional branches and loops. A language for imperative programs is given, and conditions under which LD-derivatives can be calculated automatically for conditional branches and loops are described, along with a full description of the source
-
On the complexity of solving feasibility problems with regularized models Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-13 E. G. Birgin; L. F. Bueno; J. M. Martínez
The complexity of solving feasibility problems is considered in this work. It is assumed that the constraints that define the problem can be divided into expensive and cheap constraints. At each iteration, the introduced method minimizes a regularized pth-order model of the sum of squares of the expensive constraints subject to the cheap constraints. Under a Hölder continuity property with constant
-
Extended formulations of lower-truncated transversal polymatroids Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-02 Hiroshi Imai; Keiko Imai; Hidefumi Hiraishi
Extended formulations of (k, l)-sparsity matroids defined on graphs with n vertices and m edges are investigated by Iwata et al. [Extended formulations for sparsity matroids, Math. Program. 158 (2016), pp. 565–574]. This note interprets results on (k, l)-lower-truncated transversal polymatroids by the first author in 1983, from the viewpoint of extended formulations, and shows the same O(
-
A Newton-bracketing method for a simple conic optimization problem Optim. Methods Softw. (IF 1.431) Pub Date : 2020-07-01 Sunyoung Kim; Masakazu Kojima; Kim-Chuan Toh
For the Lagrangian-DNN relaxation of quadratic optimization problems (QOPs), we propose a Newton-bracketing method to improve the performance of the bisection-projection method implemented in BBCPOP [ACM Trans. Math. Softw., 45(3):34 (2019)]. The relaxation problem is converted into the problem of finding the largest zero y* of a continuously differentiable (except at y*) convex function g : R → R such
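Editor's note: the one-dimensional mechanism is easy to illustrate: maintain a bracket [lb, ub] around the target zero and take Newton steps safeguarded by bisection whenever a step leaves the bracket. A generic sketch (assuming g(lb) and g(ub) have opposite signs; not the BBCPOP implementation, which further exploits convexity to bracket the largest zero):

```python
def newton_bracketing(g, dg, lb, ub, tol=1e-10, max_iter=100):
    """Find a zero of g in [lb, ub] by Newton steps safeguarded by bisection.

    g, dg : the function and its derivative (dg assumed nonzero at ub).
    """
    for _ in range(max_iter):
        y = ub - g(ub) / dg(ub)          # Newton step from the upper end
        if not (lb < y < ub):
            y = 0.5 * (lb + ub)          # safeguard: bisection step
        if g(y) * g(lb) > 0:
            lb = y                       # zero lies to the right of y
        else:
            ub = y                       # zero lies to the left of y
        if ub - lb < tol:
            break
    return 0.5 * (lb + ub)
```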
-
On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations Optim. Methods Softw. (IF 1.431) Pub Date : 2020-06-22 H. Calandra; S. Gratton; E. Riccietti; X. Vasseur
In this paper, we propose a new multilevel Levenberg–Marquardt optimizer for the training of artificial neural networks with quadratic loss function. This setting allows us to get further insight into the potential of multilevel optimization methods. Indeed, when the least squares problem arises from the training of artificial neural networks, the variables subject to optimization are not related by
-
Solving a continuous multifacility location problem by DC algorithms Optim. Methods Softw. (IF 1.431) Pub Date : 2020-05-29 Anuj Bajaj; Boris S. Mordukhovich; Nguyen Mau Nam; Tuyen Tran
The paper presents a new approach to solve multifacility location problems, which is based on mixed integer programming and algorithms for minimizing differences of convex (DC) functions. The main challenges for solving the multifacility location problems under consideration come from their intrinsic discrete, nonconvex, and nondifferentiable nature. We provide a reformulation of these problems as
-
A Pfaffian formula for matching polynomials of outerplanar graphs Optim. Methods Softw. (IF 1.431) Pub Date : 2020-05-21 Satoru Iwata
An outerplanar graph is a graph that can be drawn in the plane without crossing edges so that all vertices lie on the infinite face. Most organic compounds have outerplanar graph structures. The number of matchings in the skeleton graphs of organic compounds is known as the topological index Z, introduced in the early 1970s to investigate the correlation between molecular structures and physical properties
-
Further results on sum-of-squares tensors Optim. Methods Softw. (IF 1.431) Pub Date : 2020-05-19 Haibin Chen; Liqun Qi; Yiju Wang; Guanglu Zhou
Sum-of-squares (SOS) tensors play an important role in tensor positive definiteness and polynomial optimization, so it is important to determine which kinds of tensors are SOS tensors. In this paper, we first show that several types of even-order symmetric tensors are SOS tensors. Inclusion relations between several types of existing SOS tensors are developed under suitable conditions. By exploring
-
Stochastic proximal linear method for structured non-convex problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-29 Tamir Hazan; Shoham Sabach; Sergey Voldman
In this work, motivated by the challenging task of learning a deep neural network, we consider optimization problems that consist of minimizing a finite-sum of non-convex and non-smooth functions, where the non-smoothness appears as the maximum of non-convex functions with Lipschitz continuous gradient. Due to the large size of the sum, in practice, we focus here on stochastic first-order methods and
-
Symmetric rank-1 approximation of symmetric high-order tensors Optim. Methods Softw. (IF 1.431) Pub Date : 2019-10-21 Leqin Wu; Xin Liu; Zaiwen Wen
Finding the symmetric rank-1 approximation to a given symmetric tensor is an important problem due to its wide applications and its close relationship to the Z-eigenpair of a tensor. In this paper, we propose a method based on the proximal alternating linearized minimization to directly solve the optimization problem. Global convergence of our algorithm is established. Numerical experiments show that
-
A convergent Newton algorithm for computing Z-eigenvalues of an almost nonnegative irreducible tensor Optim. Methods Softw. (IF 1.431) Pub Date : 2019-08-07 Xin Zhang; Qin Ni; Zhili Ge
In this paper, we compute Z-eigenvalues of a class of tensors by studying the properties of semi-symmetric tensors. We prove that Ax^{m−1} is identically zero if and only if A_s = 0, where A_s is the associated semi-symmetric tensor of A. Based on the semi-symmetric property, an almost nonnegative irreducible tensor is defined, and we use a Newton method to compute Z-eigenvalues of this kind of tensor. The
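Editor's note: for reference, for an m-th order, n-dimensional symmetric tensor A, the Z-eigenpairs (λ, x) being computed satisfy the standard defining equations:

```latex
\mathcal{A}x^{m-1} = \lambda x, \qquad x^{\top} x = 1,
\qquad \text{where}\quad
\bigl(\mathcal{A}x^{m-1}\bigr)_i \;=\; \sum_{i_2,\dots,i_m=1}^{n} a_{i\, i_2 \cdots i_m}\, x_{i_2} \cdots x_{i_m}.
```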
-
Computation of second-order directional stationary points for group sparse optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2019-11-04 Dingtao Peng; Xiaojun Chen
We consider a nonconvex and nonsmooth group sparse optimization problem where the penalty function is the sum of compositions of a folded concave function and the ℓ2 vector norm for each group variable. We show that under some mild conditions a first-order directional stationary point is a strict local minimizer that fulfils the first-order growth condition, and a second-order directional stationary
-
Stochastic polynomial optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2019-08-22 Jiawang Nie; Liu Yang; Suhan Zhong
This paper studies stochastic optimization problems with polynomials. We propose an optimization model with sample averages and perturbations. The Lasserre-type Moment-SOS relaxations are used to solve the sample average optimization. Properties of the optimization and its relaxations are studied. Numerical experiments are presented.
-
Error estimates for iterative algorithms for minimizing regularized quadratic subproblems Optim. Methods Softw. (IF 1.431) Pub Date : 2019-10-07 Nicholas I. M. Gould; Valeria Simoncini
We derive bounds for the objective errors and gradient residuals when finding approximations to the solution of common regularized quadratic optimization problems within evolving Krylov spaces. These provide upper bounds on the number of iterations required to achieve a given stated accuracy. We illustrate the quality of our bounds on given test examples.
-
ADMM for multiaffine constrained optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2019-11-06 Wenbo Gao; Donald Goldfarb; Frank E. Curtis
We expand the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the Kurdyka–Łojasiewicz (K–Ł) property holds, this
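Editor's note: for orientation, the classical two-block ADMM template that the paper generalizes to multiaffine constraints is as follows; a minimal sketch for min f(x) + g(z) s.t. Ax + Bz = c (the multiaffine setting instead couples blocks through products; the prox-solver arguments here are illustrative):

```python
import numpy as np

def admm(prox_f, prox_g, A, B, c, rho=1.0, iters=200):
    """Classical two-block ADMM for  min f(x) + g(z)  s.t.  Ax + Bz = c.

    prox_f(u, rho): argmin_x f(x) + (rho/2)||Ax - u||^2   (user-supplied)
    prox_g(u, rho): argmin_z g(z) + (rho/2)||Bz - u||^2   (user-supplied)
    """
    z = np.zeros(B.shape[1])
    lam = np.zeros(c.shape[0])                  # dual variable
    for _ in range(iters):
        x = prox_f(c - B @ z - lam / rho, rho)  # x-update
        z = prox_g(c - A @ x - lam / rho, rho)  # z-update
        lam = lam + rho * (A @ x + B @ z - c)   # dual ascent step
    return x, z, lam
```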
-
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models Optim. Methods Softw. (IF 1.431) Pub Date : 2019-10-24 C. Cartis; N. I. M. Gould; Ph. L. Toint
An adaptive regularization algorithm is proposed that uses Taylor models of order p, p ≥ 2, of the unconstrained objective function, and that is guaranteed to find a first- and second-order critical point in at most O(max{ϵ1^{−(p+1)/p}, ϵ2^{−(p+1)/(p−1)}}) function and derivative evaluations, where ϵ1 and ϵ2 are prescribed first- and second-order optimality tolerances. This is a simple algorithm and
-
Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions Optim. Methods Softw. (IF 1.431) Pub Date : 2019-10-09 Azam Asl; Michael L. Overton
It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an analysis of the gradient method with steplengths satisfying the Armijo
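Editor's note: the line search analysed is the standard bracketing Armijo–Wolfe scheme; a minimal sketch (typical parameters c1 = 1e-4, c2 = 0.9 assumed; the paper studies its behaviour precisely when f is non-smooth):

```python
def armijo_wolfe(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing line search for the (weak) Armijo-Wolfe conditions.

    Returns a step t with
      f(x + t d) <= f(x) + c1 t g0'd    (Armijo / sufficient decrease)
      grad(x + t d)'d >= c2 g0'd        (Wolfe / curvature)
    Generic sketch; may fail on pathological non-smooth f, which is
    the regime the paper investigates.
    """
    f0, g0d = f(x), grad(x) @ d
    lo, hi, t = 0.0, float('inf'), 1.0
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0d:
            hi = t                        # Armijo fails: step too long
        elif grad(x + t * d) @ d < c2 * g0d:
            lo = t                        # Wolfe fails: step too short
        else:
            return t                      # both conditions hold
        t = 0.5 * (lo + hi) if hi < float('inf') else 2.0 * lo
    return t
```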
-
Preface Optim. Methods Softw. (IF 1.431) Pub Date : 2020-01-09 Yu-Hong Dai; Xin Liu; Jiawang Nie; Zaiwen Wen
(2020). Preface. Optimization Methods and Software: Vol. 35, Part I of the special issue dedicated to the 60th birthday of Professor Ya-xiang Yuan. Guest-Editors: Yu-Hong Dai, Xin Liu, Jiawang Nie, Zaiwen Wen, pp. 221-222.
-
On inexact solution of auxiliary problems in tensor methods for convex optimization Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-23 G.N. Grapiglia; Yu. Nesterov
ABSTRACT In this paper, we study the auxiliary problems that appear in p-order tensor methods for unconstrained minimization of convex functions with ν-Hölder continuous pth derivatives. This type of auxiliary problem corresponds to the minimization of a (p+ν)-order regularization of the pth-order Taylor approximation of the objective. For the case p = 3, we consider the use of Gradient Methods
-
Exponential augmented Lagrangian methods for equilibrium problems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-22 E. M. R. Torrealba; L. C. Matioli; M. Nasri; R. A. Castillo
We introduce exponential augmented Lagrangian methods for solving equilibrium problems in finite-dimensional spaces, extending the so-called quadratic augmented Lagrangian methods. Unlike the quadratic augmented Lagrangian methods, which are at most first-order differentiable, our exponential augmented Lagrangian methods can be differentiable to any order. Therefore, second-order methods, such as Newton's
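Editor's note: the defining difference from the quadratic method is the multiplier update; for inequality-type constraints g_i(x) ≤ 0 the exponential multiplier update takes the generic form below (notation illustrative; see the paper for the equilibrium-problem version):

```latex
% Exponential multiplier update (generic form for constraints g_i(x) <= 0)
\lambda_i^{k+1} \;=\; \lambda_i^{k}\, \exp\!\bigl(\rho_k\, g_i(x^{k+1})\bigr),
% versus the at-most-once-differentiable quadratic update
\lambda_i^{k+1} \;=\; \max\bigl\{0,\; \lambda_i^{k} + \rho_k\, g_i(x^{k+1})\bigr\}.
```

The exponential form keeps the multipliers strictly positive automatically and is smooth in its arguments, which is what makes second-order (Newton-type) inner solvers applicable.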
-
On SOR-like iteration methods for solving weakly nonlinear systems Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-20 Yifen Ke; Changfeng Ma
In this paper, we introduce a class of SOR-like iteration methods for solving systems of weakly nonlinear equations, obtained by equivalently reformulating the weakly nonlinear equation as a two-by-two block nonlinear equation. Two types of global convergence theorems are given under suitable choices of the involved splitting matrix and parameter. Numerical results for the three-dimensional
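Editor's note: the classical SOR sweep for a linear system is the kernel such methods build on; a minimal sketch for reference (the paper's methods operate on the nonlinear two-by-two block reformulation, not on this plain linear case):

```python
import numpy as np

def sor_sweep(A, b, x, omega=1.2):
    """One SOR sweep for the linear system Ax = b.

    omega : relaxation parameter (0 < omega < 2 for SPD A).
    Updates x in place, using the newest components as soon as available.
    """
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```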
-
A dual approach for optimal algorithms in distributed optimization over networks Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-17 César A. Uribe; Soomin Lee; Alexander Gasnikov; Angelia Nedić
We study dual-based algorithms for distributed convex optimization problems over networks, where the objective is to minimize a sum ∑_{i=1}^{m} f_i(z) of functions over a network. We provide complexity bounds for four different cases, namely: each function f_i is strongly convex and smooth, each function is either strongly convex or smooth, and when it is convex but neither strongly convex nor
-
Training GANs with centripetal acceleration Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-16 Wei Peng; Yu-Hong Dai; Hui Zhang; Lizhi Cheng
Training generative adversarial networks (GANs) often suffers from cyclic behaviours of iterates. Based on a simple intuition that the direction of centripetal acceleration of an object moving in uniform circular motion is toward the centre of the circle, we present the Simultaneous Centripetal Acceleration (SCA) method and the Alternating Centripetal Acceleration (ACA) method to alleviate the cyclic
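Editor's note: the centripetal idea amounts to augmenting each player's gradient step with the difference of the two most recent gradients; a minimal simultaneous-update sketch for min_x max_y f(x, y) (illustrative only: the coefficient names and sign conventions here are assumptions, not the paper's exact scheme):

```python
def sca_step(x, y, gx, gy, gx_prev, gy_prev, eta=0.01, beta=0.5):
    """One Simultaneous Centripetal Acceleration-style step (sketch).

    gx, gy           : current gradients of f w.r.t. x and y.
    gx_prev, gy_prev : gradients at the previous iterate.
    The term beta*(g - g_prev) plays the role of a centripetal correction
    damping the cyclic rotation of plain gradient descent-ascent.
    """
    x_new = x - eta * (gx + beta * (gx - gx_prev))   # descent for minimizer
    y_new = y + eta * (gy + beta * (gy - gy_prev))   # ascent for maximizer
    return x_new, y_new
```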
-
Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes Optim. Methods Softw. (IF 1.431) Pub Date : 2020-04-13 Jinlong Lei; Uday V. Shanbhag
This work considers the minimization of a sum of an expectation-valued coordinate-wise smooth nonconvex function and a nonsmooth block-separable convex regularizer. We propose an asynchronous variance-reduced algorithm, where in each iteration, a single block is randomly chosen to update its estimates by a proximal variable sample-size stochastic gradient scheme, while the remaining blocks are kept
-
Complexity and performance of an Augmented Lagrangian algorithm Optim. Methods Softw. (IF 1.431) Pub Date : 2020-03-31 E. G. Birgin; J. M. Martínez
Algencan is a well established safeguarded Augmented Lagrangian algorithm introduced in [R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt, On Augmented Lagrangian methods with general lower-level constraints, SIAM J. Optim. 18 (2008), pp. 1286–1309]. Complexity results that report its worst-case behaviour in terms of iterations and evaluations of functions and derivatives that are necessary
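Editor's note: the safeguarded augmented Lagrangian loop underlying methods of this family follows the classical pattern: approximately minimize the augmented Lagrangian, then update (and safeguard) the multipliers, increasing the penalty only when feasibility stalls. A generic sketch for equality constraints h(x) = 0 (not Algencan's actual code; solver and parameter names are illustrative):

```python
import numpy as np

def augmented_lagrangian(min_L, h, x0, rho=10.0, iters=30,
                         lam_max=1e8, tau=0.5, gamma=10.0):
    """Safeguarded augmented Lagrangian loop for constraints h(x) = 0 (sketch).

    min_L(x, lam, rho): approximate minimizer of
        L(x) = f(x) + lam'h(x) + (rho/2)||h(x)||^2   (user-supplied inner solver).
    Multipliers are projected onto a box (the 'safeguarding'); rho grows
    only when infeasibility does not decrease enough.
    """
    x = x0
    lam = np.zeros_like(np.asarray(h(x0), dtype=float))
    feas_prev = np.inf
    for _ in range(iters):
        x = min_L(x, lam, rho)                  # inner subproblem
        hx = np.asarray(h(x), dtype=float)
        lam = np.clip(lam + rho * hx, -lam_max, lam_max)   # safeguarded update
        feas = np.linalg.norm(hx, np.inf)
        if feas > tau * feas_prev:              # insufficient progress
            rho *= gamma
        feas_prev = feas
    return x, lam
```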