
-
Secant Update generalized version of PSB: a new approach Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-12 Nicolas Boutet, Rob Haelterman, Joris Degroote
In optimization, one of the main challenges of the widely used family of Quasi-Newton methods is to find an estimate of the Hessian matrix as close as possible to the real matrix. In this paper, we develop a new update formula for the estimate of the Hessian starting from the Powell-Symmetric-Broyden (PSB) formula and adding pieces of information from the previous steps of the optimization path. This
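For context, the classical PSB update that this work generalizes is the symmetric rank-two correction satisfying the secant equation \(B_{k+1} s_k = y_k\), with \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f(x_{k+1}) - \nabla f(x_k)\):

$$B_{k+1} = B_k + \frac{(y_k - B_k s_k) s_k^T + s_k (y_k - B_k s_k)^T}{s_k^T s_k} - \frac{s_k^T (y_k - B_k s_k)}{(s_k^T s_k)^2}\, s_k s_k^T.$$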
-
Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-12 Caroline Geiersbach, Teresa Scarinci
For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method applied to Hilbert spaces, motivated by optimization problems with partial differential
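As a rough finite-dimensional illustration of the method analyzed here (the paper works in Hilbert spaces; the \(\ell_1\) nonsmooth term and all names below are illustrative assumptions, not the authors' setting):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, a typical nonsmooth term g.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_proximal_gradient(grad_sample, x0, step, n_iter, rng):
    # x_{k+1} = prox_{step*g}(x_k - step * G(x_k, xi_k)), where
    # G(x, xi) is an unbiased stochastic gradient of the smooth part.
    x = x0.copy()
    for _ in range(n_iter):
        xi = rng.standard_normal(x.shape)  # random sample driving G
        x = soft_threshold(x - step * grad_sample(x, xi), step)
    return x
```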
-
Gauss–Newton-type methods for bilevel optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-10 Jörg Fliege, Andrey Tin, Alain Zemkoho
This article studies Gauss–Newton-type methods for over-determined systems to find solutions to bilevel programming problems. To proceed, we use the lower-level value function reformulation of bilevel programs and consider necessary optimality conditions under appropriate assumptions. First, under strict complementarity for upper- and lower-level feasibility constraints, we prove the convergence of
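For orientation, a generic Gauss–Newton step for an over-determined system \(F(x) = 0\) looks as follows; this is only the textbook building block, not the authors' specific scheme for the bilevel reformulation:

```python
import numpy as np

def gauss_newton_step(residual, jacobian, x):
    # The step d minimizes the linearized residual ||F(x) + J(x) d||_2,
    # i.e. it solves a linear least-squares problem.
    F, J = residual(x), jacobian(x)
    d, *_ = np.linalg.lstsq(J, -F, rcond=None)
    return x + d
```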
-
Decomposition Algorithms for Some Deterministic and Two-Stage Stochastic Single-Leader Multi-Follower Games Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-05 Pedro Borges, Claudia Sagastizábal, Mikhail Solodov
We consider a certain class of hierarchical decision problems that can be viewed as single-leader multi-follower games and can be represented by a virtual market coordinator trying to set a price system for traded goods, according to some criterion that balances supply and demand. The objective function of the market coordinator involves the decisions of many agents, which are taken independently by solving
-
Using partial spectral information for block diagonal preconditioning of saddle-point systems Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-04 Alison Ramage, Daniel Ruiz, Annick Sartenaer, Charlotte Tannier
Considering saddle-point systems of the Karush–Kuhn–Tucker (KKT) form, we propose approximations of the “ideal” block diagonal preconditioner based on the exact Schur complement proposed by Murphy et al. (SIAM J Sci Comput 21(6):1969–1972, 2000). We focus on the case where the (1,1) block is symmetric and positive definite, but with a few very small eigenvalues that possibly affect the convergence
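For reference, given the saddle-point matrix in KKT form, the “ideal” preconditioner of Murphy et al. uses the exact Schur complement:

$$\mathcal{K} = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}, \qquad \mathcal{P} = \begin{pmatrix} A & 0 \\ 0 & B A^{-1} B^T \end{pmatrix},$$

for which \(\mathcal{P}^{-1}\mathcal{K}\) has at most three distinct eigenvalues, \(1\) and \((1 \pm \sqrt{5})/2\), so a Krylov method converges in at most three iterations; the approximations studied here replace the exact Schur complement with cheaper partial spectral information.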
-
Tensor Z-eigenvalue complementarity problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-03 Meilan Zeng
This paper studies tensor Z-eigenvalue complementarity problems. We formulate the tensor Z-eigenvalue complementarity problem as a constrained polynomial optimization problem, and propose a semidefinite relaxation algorithm for computing the complementarity Z-eigenvalues of tensors. For every tensor that has finitely many complementarity Z-eigenvalues, we can compute all of them and show that our algorithm has
-
Polyhedral approximations of the semidefinite cone and their application Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-02 Yuzhu Wang, Akihiro Tanaka, Akiko Yoshise
We develop techniques to construct a series of sparse polyhedral approximations of the semidefinite cone. Motivated by the semidefinite (SD) bases proposed by Tanaka and Yoshise (Ann Oper Res 265:155–182, 2018), we propose a simple expansion of SD bases so as to keep the sparsity of the matrices composing it. We prove that the polyhedral approximation using our expanded SD bases contains the set of
-
Theoretical and numerical comparison of the Karush–Kuhn–Tucker and value function reformulations in bilevel optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-02 Alain B. Zemkoho, Shenglong Zhou
The Karush–Kuhn–Tucker and value function (lower-level value function, to be precise) reformulations are the most common single-level transformations of the bilevel optimization problem. So far, these reformulations have either been studied independently or as a joint optimization problem in an attempt to take advantage of the best properties from each model. To the best of our knowledge, these reformulations
-
On the properties of the cosine measure and the uniform angle subspace Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-02 Rommel G. Regis
Consider a nonempty finite set of nonzero vectors \(S \subset \mathbb {R}^n\). The angle between a nonzero vector \(v \in \mathbb {R}^n\) and S is the smallest angle between v and an element of S. The cosine measure of S is the cosine of the largest possible angle between a nonzero vector \(v \in \mathbb {R}^n\) and S. The cosine measure provides a way of quantifying the positive spanning property
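A brute-force Monte Carlo sketch of this definition (all names are illustrative; random sampling can only over-estimate the true measure, but it conveys the idea):

```python
import numpy as np

def cosine_measure_estimate(S, n_samples=200000, seed=0):
    # cm(S) = min over unit v of max over s in S of cos(angle(v, s)).
    rng = np.random.default_rng(seed)
    U = S / np.linalg.norm(S, axis=1, keepdims=True)  # rows: unit vectors of S
    V = rng.standard_normal((n_samples, S.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)     # random unit vectors v
    return (V @ U.T).max(axis=1).min()

# cm(S) > 0 iff S positively spans R^n; e.g. for the 2n signed coordinate
# directions in R^3 the true value is 1/sqrt(3):
# cosine_measure_estimate(np.vstack([np.eye(3), -np.eye(3)]))
```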
-
The Tikhonov regularization for vector equilibrium problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-02 Lam Quoc Anh, Tran Quoc Duy, Le Dung Muu, Truong Van Tri
We consider vector equilibrium problems in real Banach spaces and study their regularized problems. Based on cone continuity and generalized convexity properties of vector-valued mappings, we propose general conditions that guarantee existence of solutions to such problems in cases of monotonicity and nonmonotonicity. First, our study indicates that every Tikhonov trajectory converges to a solution
-
A proximal DC approach for quadratic assignment problem Comput. Optim. Appl. (IF 1.743) Pub Date : 2021-01-02 Zhuoxuan Jiang, Xinyuan Zhao, Chao Ding
In this paper, we show that the quadratic assignment problem (QAP) can be reformulated as an equivalent rank-constrained doubly nonnegative (DNN) problem. Under the framework of the difference of convex functions (DC) approach, a semi-proximal DC algorithm is proposed for solving the relaxation of the rank-constrained DNN problem whose subproblems can be solved by the semi-proximal augmented Lagrangian
-
On mixed-integer optimal control with constrained total variation of the integer control Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-12-07 Sebastian Sager, Clemens Zeile
The combinatorial integral approximation (CIA) decomposition suggests solving mixed-integer optimal control problems by solving one continuous nonlinear control problem and one mixed-integer linear program (MILP). Unrealistically frequent switching can be avoided by adding a constraint on the total variation to the MILP. Within this work, we present a fast heuristic way to solve this CIA problem and investigate
-
A bundle method for nonsmooth DC programming with application to chance-constrained problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-19 W. van Ackooij, S. Demassey, P. Javal, H. Morais, W. de Oliveira, B. Swaminathan
This work considers nonsmooth and nonconvex optimization problems whose objective and constraint functions are defined by difference-of-convex (DC) functions. We consider an infeasible bundle method based on the so-called improvement functions to compute critical points for problems of this class. Our algorithm neither employs penalization techniques nor solves subproblems with linearized constraints
-
Nonconvex robust programming via value-function optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-19 Ying Cui, Ziyu He, Jong-Shi Pang
Convex programming based robust optimization has been an active research topic over the past two decades, partly because of its computational tractability for many classes of optimization problems and uncertainty sets. However, many problems arising from modern operations research and statistical learning applications are nonconvex even in the nominal case, let alone their robust counterpart. In this paper
-
Globalized inexact proximal Newton-type methods for nonconvex composite functions Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-16 Christian Kanzow, Theresa Lechner
Optimization problems with composite functions consist of an objective function which is the sum of a smooth and a (convex) nonsmooth term. This particular structure is exploited by the class of proximal gradient methods and some of their generalizations like proximal Newton and quasi-Newton methods. The current literature on these classes of methods almost exclusively considers the case where also
-
The Gauss–Seidel method for generalized Nash equilibrium problems of polynomials Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-12 Jiawang Nie, Xindong Tang, Lingling Xu
This paper concerns the generalized Nash equilibrium problem of polynomials (GNEPP). We apply the Gauss–Seidel method and Moment-SOS relaxations to solve GNEPPs. The convergence of the Gauss–Seidel method is known for some special GNEPPs, such as generalized potential games (GPGs). We give a sufficient condition for GPGs and propose a numerical certificate, based on Putinar’s Positivstellensatz. Numerical
-
Single-forward-step projective splitting: exploiting cocoercivity Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-09 Patrick R. Johnstone, Jonathan Eckstein
This work describes a new variant of projective splitting for solving maximal monotone inclusions and complicated convex optimization problems. In the new version, cocoercive operators can be processed with a single forward step per iteration. In the convex optimization context, cocoercivity is equivalent to Lipschitz differentiability. Prior forward-step versions of projective splitting did not fully
-
An interior point-proximal method of multipliers for convex quadratic programming Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-09 Spyridon Pougkakiotis, Jacek Gondzio
In this paper we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory
-
Implementing and modifying Broyden class updates for large scale optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-09 Martin Buhmann, Dirk Siegel
We consider Broyden class updates for large scale optimization problems in n dimensions, restricting attention to the case when the initial second derivative approximation is the identity matrix. Under this assumption we present an implementation of the Broyden class based on a coordinate transformation on each iteration. It requires only \(2nk + O(k^{2}) + O(n)\) multiplications on the kth iteration
-
An efficient algorithm for nonconvex-linear minimax optimization problem and its application in solving weighted maximin dispersion problem Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-09 Weiwei Pan, Jingjing Shen, Zi Xu
In this paper, we study the minimax optimization problem that is nonconvex in one variable and linear in the other, a special case of the nonconvex-concave minimax problem, which has attracted significant attention lately due to its applications in modern machine learning tasks, signal processing and many other fields. We propose a new alternating gradient projection algorithm and prove
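In outline, an alternating gradient projection scheme for \(\min_x \max_y f(x, y)\) interleaves a projected descent step in \(x\) with a projected ascent step in \(y\); this generic sketch reflects the overall shape only, not the authors' exact algorithm or step-size rules:

```python
def alternating_gradient_projection(grad_x, grad_y, proj_X, proj_Y,
                                    x, y, alpha, beta, n_iter):
    # Projected gradient descent in x, then projected gradient ascent in y.
    for _ in range(n_iter):
        x = proj_X(x - alpha * grad_x(x, y))
        y = proj_Y(y + beta * grad_y(x, y))
    return x, y
```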
-
Decomposition and discrete approximation methods for solving two-stage distributionally robust optimization problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-11-04 Yannan Chen, Hailin Sun, Huifu Xu
Decomposition methods have been well studied for solving two-stage and multi-stage stochastic programming problems, see Rockafellar and Wets (Math. Oper. Res. 16:119–147, 1991), Ruszczyński and Shapiro (Stochastic Programming, Handbook in OR & MS, North-Holland Publishing Company, Amsterdam, 2003) and Ruszczyński (Math. Program. 79:333–353, 1997). In this paper, we propose an algorithmic framework
-
On the use of polynomial models in multiobjective directional direct search Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-10-12 C. P. Brás, A. L. Custódio
Polynomial interpolation or regression models are an important tool in Derivative-free Optimization, acting as surrogates of the real function. In this work, we propose the use of these models in the multiobjective framework of directional direct search, namely the one of Direct Multisearch. Previously evaluated points are used to build quadratic polynomial models, which are minimized in an attempt
-
Convergence study on strictly contractive Peaceman–Rachford splitting method for nonseparable convex minimization models with quadratic coupling terms Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-10-03 Peixuan Li, Yuan Shen, Suhong Jiang, Zehua Liu, Caihua Chen
The alternating direction method of multipliers (ADMM) and the Peaceman–Rachford splitting method (PRSM) are two popular splitting algorithms for solving large-scale separable convex optimization problems. Though problems with nonseparable structure appear frequently in practice, research on splitting methods for such problems remains scarce. Very recently, Chen et al. (Math Program 173(1–2):37–77
-
Tractable ADMM schemes for computing KKT points and local minimizers for \(\ell_0\)-minimization problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-10-01 Yue Xie, Uday V. Shanbhag
We consider an \(\ell _0\)-minimization problem where \(f(x) + \gamma \Vert x\Vert _0\) is minimized over a polyhedral set and the \(\ell _0\)-norm regularizer implicitly emphasizes the sparsity of the solution. Such a setting captures a range of problems in image processing and statistical learning. Given the nonconvex and discontinuous nature of this norm, convex regularizers as substitutes are often
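Part of what makes such schemes workable is that, absent the polyhedral constraint, the proximal map of the \(\gamma \Vert \cdot \Vert _0\) term is simple componentwise hard thresholding:

$$\bigl(\operatorname{prox}_{\gamma \Vert \cdot \Vert _0}(v)\bigr)_i = \begin{cases} v_i, & v_i^2 > 2\gamma, \\ 0, & v_i^2 \le 2\gamma. \end{cases}$$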
-
T-positive semidefiniteness of third-order symmetric tensors and T-semidefinite programming Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-30 Meng-Meng Zheng, Zheng-Hai Huang, Yong Wang
The T-product for third-order tensors has been used extensively in the literature. In this paper, we first introduce first-order and second-order T-derivatives for the multi-variable real-valued function with the tensor T-product. Inspired by an equivalent characterization of a twice continuously T-differentiable multi-variable real-valued function being convex, we present a definition of the T-positive
-
An accelerated active-set algorithm for a quadratic semidefinite program with general constraints Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-27 Chungen Shen, Yunlong Wang, Wenjuan Xue, Lei-Hong Zhang
In this paper, we are concerned with efficient algorithms for solving the least squares semidefinite programming problem, which contains many equality and inequality constraints. Our proposed method is built upon its dual formulation and is a type of active-set approach. In particular, by exploiting the nonnegative constraints in the dual form, our method first uses the information from the Barzilai–Borwein
-
Properties of the delayed weighted gradient method Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-26 Roberto Andreani, Marcos Raydan
The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits a surprising and perhaps unexpected fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties
-
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-23 Nicolas Loizou, Peter Richtárik
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the
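The common template behind these momentum variants is the heavy ball iteration; a minimal SGD sketch (the sampling mechanism and all names are illustrative assumptions):

```python
import numpy as np

def sgd_heavy_ball(stoch_grad, x0, gamma, beta, n_iter, rng):
    # x_{k+1} = x_k - gamma * g(x_k, xi_k) + beta * (x_k - x_{k-1}),
    # with g an unbiased stochastic gradient and momentum beta in [0, 1).
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        xi = rng.standard_normal(x.shape)  # randomness driving g
        x, x_prev = x - gamma * stoch_grad(x, xi) + beta * (x - x_prev), x
    return x
```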
-
Accelerating convergence of the globalized Newton method to critical solutions of nonlinear equations Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-22 A. Fischer, A. F. Izmailov, M. V. Solodov
In the case of singular (and possibly even nonisolated) solutions of nonlinear equations, while superlinear convergence of the Newton method cannot be guaranteed, local linear convergence from large domains of starting points still holds under certain reasonable assumptions. We consider a linesearch globalization of the Newton method, combined with extrapolation and over-relaxation accelerating techniques
-
Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton’s equation Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-18 Andrea Caliciotti, Giovanni Fasano, Florian Potra, Massimo Roma
In this work, we deal with Truncated Newton methods for solving large scale (possibly nonconvex) unconstrained optimization problems. In particular, we consider the use of a modified Bunch and Kaufman factorization for solving the Newton equation, at each (outer) iteration of the method. The Bunch and Kaufman factorization of a tridiagonal matrix is an effective and stable matrix decomposition, which
-
A Lagrange multiplier method for semilinear elliptic state constrained optimal control problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-18 Veronika Karl, Ira Neitzel, Daniel Wachsmuth
In this paper we apply an augmented Lagrange method to a class of semilinear elliptic optimal control problems with pointwise state constraints. We show strong convergence of subsequences of the primal variables to a local solution of the original problem as well as weak convergence of the adjoint states and weak-* convergence of the multipliers associated to the state constraint. Moreover, we show
-
A sequential partial linearization algorithm for the symmetric eigenvalue complementarity problem Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-16 Masao Fukushima, Joaquim Júdice, Welington de Oliveira, Valentina Sessa
In this paper, we introduce a Sequential Partial Linearization (SPL) algorithm for finding a solution of the symmetric Eigenvalue Complementarity Problem (EiCP). The algorithm can also be used for the computation of a stationary point of a standard fractional quadratic program. A first version of the SPL algorithm employs a line search technique and possesses global convergence to a solution of the
-
A proximal-point outer approximation algorithm Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-09 Massimo De Mauri, Joris Gillis, Jan Swevers, Goele Pipeleers
Many engineering and scientific applications, e.g. resource allocation, control of hybrid systems, scheduling, etc., require the solution of mixed-integer non-linear problems (MINLPs). Problems of this class combine the high computational burden arising from considering discrete variables with the complexity of non-linear functions. As a consequence, the development of algorithms able to efficiently
-
Hybrid Riemannian conjugate gradient methods with global convergence properties Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-09-05 Hiroyuki Sakai, Hideaki Iiduka
This paper presents Riemannian conjugate gradient methods and global convergence analyses under the strong Wolfe conditions. The main idea of the proposed methods is to combine the good global convergence properties of the Dai–Yuan method with the efficient numerical performance of the Hestenes–Stiefel method. One of the proposed algorithms is a generalization to Riemannian manifolds of the hybrid
-
Convergence rates for an inexact ADMM applied to separable convex optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-25 William W. Hager, Hongchao Zhang
Convergence rates are established for an inexact accelerated alternating direction method of multipliers (I-ADMM) for general separable convex optimization with a linear constraint. Both ergodic and non-ergodic iterates are analyzed. Relative to the iteration number k, the convergence rate is \(\mathcal{O}(1/k)\) in a convex setting and \(\mathcal{O}(1/k^2)\) in a strongly convex setting. When
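For orientation, the exact scaled-form ADMM iteration for \(\min \{ f(x) + g(z) : Ax + Bz = b \}\), which the inexact accelerated variant relaxes, reads

$$\begin{aligned} x^{k+1} &= \arg\min_x \, f(x) + \tfrac{\rho}{2} \Vert Ax + Bz^k - b + u^k \Vert^2, \\ z^{k+1} &= \arg\min_z \, g(z) + \tfrac{\rho}{2} \Vert Ax^{k+1} + Bz - b + u^k \Vert^2, \\ u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - b, \end{aligned}$$

with scaled dual variable \(u\) and penalty parameter \(\rho > 0\).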
-
Expected residual minimization method for monotone stochastic tensor complementarity problem Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-19 Zhenyu Ming, Liping Zhang, Liqun Qi
In this paper, we first introduce a new class of structured tensors, named strictly positive semidefinite tensors, and show that a strictly positive semidefinite tensor is not an \(R_0\) tensor. We focus on the stochastic tensor complementarity problem (STCP), where the expectation of the involved tensor is a strictly positive semidefinite tensor. We denote such an STCP as a monotone STCP. Based on
-
Riemannian conjugate gradient methods with inverse retraction Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-17 Xiaojing Zhu, Hiroyuki Sato
We propose a new class of Riemannian conjugate gradient (CG) methods, in which inverse retraction is used instead of vector transport for search direction construction. In existing methods, differentiated retraction is often used for vector transport to move the previous search direction to the current tangent space. However, a different perspective is adopted here, motivated by the fact that inverse
-
Weak convergence of iterative methods for solving quasimonotone variational inequalities Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-07 Hongwei Liu; Jun Yang
In this work, we introduce self-adaptive methods for solving variational inequalities with Lipschitz continuous and quasimonotone mappings (or Lipschitz continuous mappings without monotonicity) in real Hilbert spaces. Under suitable assumptions, the convergence of the algorithms is established without knowledge of the Lipschitz constant of the mapping. The results obtained in this paper extend some recent
-
On the interplay between acceleration and identification for the proximal gradient algorithm Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-06 Gilles Bareilles; Franck Iutzeler
In this paper, we study the interplay between acceleration and structure identification for the proximal gradient algorithm. While acceleration is generally beneficial in terms of functional decrease, we report and analyze several cases where its interplay with identification has negative effects on the algorithm's behavior (oscillation of iterates, loss of structure, etc.). Then, we present a generic method
-
Consistent treatment of incompletely converged iterative linear solvers in reverse-mode algorithmic differentiation Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-08-03 Siamak Akbarzadeh; Jan Hückelheim; Jens-Dominik Müller
Algorithmic differentiation (AD) is a widely-used approach to compute derivatives of numerical models. Many numerical models include an iterative process to solve non-linear systems of equations. To improve efficiency and numerical stability, AD is typically not applied to the linear solvers. Instead, the differentiated linear solver call is replaced with hand-produced derivative code that exploits
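The standard identities such hand-produced code exploits: if \(x\) solves \(Ax = b\), then the reverse-mode sensitivities of the solve are

$$\bar{b} = A^{-T} \bar{x}, \qquad \bar{A} = -\,\bar{b}\, x^T,$$

so one additional transposed solve replaces differentiation through every inner iteration of the linear solver.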
-
Global optimization via inverse distance weighting and radial basis functions Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-27 Alberto Bemporad
Global optimization problems whose objective function is expensive to evaluate can be solved effectively by recursively fitting a surrogate function to function samples and minimizing an acquisition function to generate new samples. The acquisition step trades off between seeking a new optimization vector where the surrogate attains its minimum (exploitation of the surrogate) and looking for regions of
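A minimal sketch of such a surrogate/acquisition loop, assuming SciPy's RBFInterpolator as the surrogate and a simple distance-based exploration term (the acquisition weighting and candidate sampling here are illustrative, not the paper's exact method):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_minimize(f, bounds, n_init=10, n_iter=30, xi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))    # initial samples
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        surr = RBFInterpolator(X, y)                   # fit surrogate
        C = rng.uniform(lo, hi, size=(1000, len(lo)))  # candidate points
        dist = np.linalg.norm(C[:, None] - X[None], axis=2).min(axis=1)
        acq = surr(C) - xi * dist                      # exploit - explore
        x_new = C[np.argmin(acq)]
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))                     # evaluate expensive f
    return X[np.argmin(y)], y.min()
```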
-
On a numerical shape optimization approach for a class of free boundary problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-21 A. Boulkhemair; A. Chakib; A. Nachaoui; A. A. Niftiyev; A. Sadik
This paper is devoted to a numerical method for the approximation of a class of free boundary problems of Bernoulli’s type, reformulated as optimal shape design problems with appropriate shape functionals. We show the existence of the shape derivative of the cost functional on a class of admissible domains and compute it by using the formula proposed in Boulkhemair (SIAM J Control
-
A new method based on the proximal bundle idea and gradient sampling technique for minimizing nonsmooth convex functions Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-20 M. Maleknia; M. Shamsi
In this paper, we combine the positive aspects of the gradient sampling (GS) and bundle methods, which are among the most efficient methods in nonsmooth optimization, to develop a robust method for solving unconstrained nonsmooth convex optimization problems. The main aim of the proposed method is to take advantage of both GS and bundle methods while avoiding their drawbacks. At each iteration of this method
-
A variation of Broyden class methods using Householder adaptive transforms Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-14 S. Cipolla; C. Di Fiore; P. Zellini
In this work we introduce and study novel Quasi Newton minimization methods based on a Hessian approximation Broyden Class-type updating scheme, where a suitable matrix \(\tilde{B}_k\) is updated instead of the current Hessian approximation \(B_k\). We identify conditions which imply the convergence of the algorithm and, if exact line search is chosen, its quadratic termination. By a remarkable connection
-
The distance between convex sets with Minkowski sum structure: application to collision detection Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-13 Xiangfeng Wang; Junping Zhang; Wenxing Zhang
The distance between sets is a long-standing computational geometry problem. In robotics, the distance between convex sets with Minkowski sum structure plays a fundamental role in collision detection. However, it is typically nontrivial to compute, even if the projection onto each component set admits an explicit formula. In this paper, we explore the problem of calculating the distance between convex
-
An explicit Tikhonov algorithm for nested variational inequalities Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-06 Lorenzo Lampariello; Christoph Neumann; Jacopo M. Ricci; Simone Sagratella; Oliver Stein
We consider nested variational inequalities consisting of an (upper-level) variational inequality whose feasible set is given by the solution set of another (lower-level) variational inequality. Purely hierarchical convex bilevel optimization problems and certain multi-follower games are particular instances of nested variational inequalities. We present an explicit and ready-to-implement Tikhonov-type
-
Acceleration techniques for level bundle methods in weakly smooth convex constrained optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-06 Yunmei Chen; Xiaojing Ye; Wei Zhang
We develop a unified level-bundle method, called the accelerated constrained level-bundle (ACLB) algorithm, for solving constrained convex optimization problems where the objective and constraint functions can be nonsmooth, weakly smooth, and/or smooth. ACLB employs Nesterov’s accelerated gradient technique, and hence retains the same iteration complexity as existing bundle-type methods if the objective
-
Make \(\ell_1\) regularization effective in training sparse CNN Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-07-04 Juncai He; Xiaodong Jia; Jinchao Xu; Lian Zhang; Liang Zhao
Compressed Sensing using \(\ell _1\) regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to provide an answer to this question and to show how to make it work. Following Xiao (J Mach Learn Res 11(Oct):2543–2596, 2010), we
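A simpler relative of Xiao's regularized dual averaging that conveys the same principle: a proximal (ISTA-type) SGD step soft-thresholds the weights after each gradient update, so small weights become exactly zero rather than merely small. This sketch is a hedged illustration, not the paper's algorithm:

```python
import numpy as np

def prox_sgd_l1_step(w, grad, lr, lam):
    # One proximal SGD step for loss(w) + lam * ||w||_1: gradient step
    # on the smooth loss, then soft-thresholding (exact zeros = sparsity).
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
```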
-
Inverse point source location with the Helmholtz equation on a bounded domain Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-29 Konstantin Pieper; Bao Quoc Tang; Philip Trautmann; Daniel Walter
The problem of recovering acoustic sources, more specifically monopoles, from point-wise measurements of the corresponding acoustic pressure at a limited number of frequencies is addressed. To this purpose, a family of sparse optimization problems in measure space in combination with the Helmholtz equation on a bounded domain is considered. A weighted norm with unbounded weight near the observation
-
A second-order shape optimization algorithm for solving the exterior Bernoulli free boundary problem using a new boundary cost functional Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-25 Julius Fergy T. Rabago; Hideyuki Azegami
The exterior Bernoulli problem is rephrased into a shape optimization problem using a new type of objective function called the Dirichlet-data-gap cost function which measures the \(L^2\)-distance between the Dirichlet data of two state functions. The first-order shape derivative of the cost function is explicitly determined via the chain rule approach. Using the same technique, the second-order shape
-
Convergence study of indefinite proximal ADMM with a relaxation factor Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-23 Min Tao
The alternating direction method of multipliers (ADMM) is widely used to solve separable convex programming problems. At each iteration, the classical ADMM solves the subproblems exactly. For many problems arising from practical applications, it is usually impossible or too expensive to obtain the exact solution of a subproblem. To overcome this, a special proximal term is added to ease the solvability
-
Oracle-based algorithms for binary two-stage robust optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-23 Nicolas Kämmerling; Jannis Kurtz
In this work we study binary two-stage robust optimization problems with objective uncertainty. We present an algorithm to efficiently calculate lower bounds for the binary two-stage robust problem by alternately solving the underlying deterministic problem and an adversarial problem. For the deterministic problem, any oracle can be used which returns an optimal solution for every possible scenario
-
On the resolution of misspecified convex optimization and monotone variational inequality problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-22 Hesam Ahmadi; Uday V. Shanbhag
We consider a misspecified optimization problem, requiring the minimization of a function \(f(\cdot;\theta ^*)\) over a closed and convex set X where \(\theta ^*\) is an unknown vector of parameters that may be learnt by a parallel learning process. Here, we develop coupled schemes that generate iterates \((x_k,\theta _k)\) such that, as \(k \rightarrow \infty\), \(x_k \rightarrow x^*\), a minimizer of \(f(\cdot;\theta
-
An augmented Lagrangian algorithm for multi-objective optimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-20 G. Cocchi; M. Lapucci
In this paper, we propose an adaptation of the classical augmented Lagrangian method for dealing with multi-objective optimization problems. Specifically, after a brief review of the literature, we give a suitable definition of the augmented Lagrangian for equality and inequality constrained multi-objective problems. We exploit this object in a general computational scheme that is proved to converge, under
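The classical single-objective building block being adapted here is, for equality constraints \(c(x) = 0\),

$$L_\rho(x, \lambda) = f(x) + \lambda^T c(x) + \frac{\rho}{2} \Vert c(x) \Vert^2, \qquad \lambda^{k+1} = \lambda^k + \rho\, c(x^{k+1}),$$

where \(x^{k+1}\) approximately minimizes \(L_\rho(\cdot, \lambda^k)\).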
-
On the complexity of a hybrid proximal extragradient projective method for solving monotone inclusion problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-15 Mauricio Romero Sicre
In a series of papers (Solodov and Svaiter in J Convex Anal 6(1):59–70, 1999; Set-Valued Anal 7(4):323–345, 1999; Numer Funct Anal Optim 22(7–8):1013–1035, 2001) Solodov and Svaiter introduced new inexact variants of the proximal point method with relative error tolerances. Point-wise and ergodic iteration-complexity bounds for one of these methods, namely the hybrid proximal extragradient method (1999)
-
A regularization method for constrained nonlinear least squares Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-14 Dominique Orban; Abel Soares Siqueira
We propose a regularization method for nonlinear least-squares problems with equality constraints. Our approach is modeled after those of Arreckx and Orban (SIAM J Optim 28(2):1613–1639, 2018. https://doi.org/10.1137/16M1088570) and Dehghani et al. (INFOR Inf Syst Oper Res, 2019. https://doi.org/10.1080/03155986.2018.1559428) and applies a selective regularization scheme that may be viewed as a reformulation
-
Inexact restoration with subsampled trust-region methods for finite-sum minimization Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-09 Stefania Bellavia; Nataša Krejić; Benedetta Morini
Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods where objective functions, gradients and Hessians are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method which employs suitable approximations of the objective function
-
Nonlinear optimal control: a numerical scheme based on occupation measures and interval analysis Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-06-08 Nicolas Delanoue; Mehdi Lhommeau; Sébastien Lagrange
This paper presents an approximation scheme for optimal control problems using finite-dimensional linear programs and interval analysis. This is done in two parts. Following Vinter’s approach (SIAM J Control Optim 31(2):518–538, 1993) and using occupation measures, the optimal control problem is written as a linear programming problem of infinite dimension (weak formulation). Thanks to interval arithmetic
-
An active-set algorithmic framework for non-convex optimization problems over the simplex Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-05-16 Andrea Cristofari; Marianna De Santis; Stefano Lucidi; Francesco Rinaldi
In this paper, we describe a new active-set algorithmic framework for minimizing a non-convex function over the unit simplex. At each iteration, the method makes use of a rule for identifying active variables (i.e., variables that are zero at a stationary point) and specific directions (that we name active-set gradient related directions) satisfying a new “nonorthogonality” type of condition. We prove
-
Convergence rates of subgradient methods for quasi-convex optimization problems Comput. Optim. Appl. (IF 1.743) Pub Date : 2020-05-15 Yaohua Hu; Jiawen Li; Carisa Kwok Wai Yu
Quasi-convex optimization plays a pivotal role in many fields including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the quantitative convergence theory, including the iteration complexity and convergence rates, of various subgradient methods for solving quasi-convex optimization
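The basic iteration underlying these methods is a projected, normalized subgradient step,

$$x_{k+1} = P_X\!\left(x_k - v_k \frac{g_k}{\Vert g_k \Vert}\right), \qquad g_k \in \partial^* f(x_k),$$

with step sizes \(v_k > 0\) and \(\partial^* f\) a subdifferential suitable for quasi-convex functions (e.g., of Greenberg–Pierskalla type); normalization is natural here because quasi-convex subgradients carry only directional information.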