
Automatic repair of convex optimization problems

Research Article
Published in Optimization and Engineering

Abstract

Given an infeasible, unbounded, or pathological convex optimization problem, a natural question to ask is: what is the smallest change we can make to the problem’s parameters such that the problem becomes solvable? In this paper, we address this question by posing it as an optimization problem involving the minimization of a convex regularization function of the parameters, subject to the constraint that the parameters result in a solvable problem. We propose a heuristic for approximately solving this problem that is based on the penalty method and leverages recently developed methods that can efficiently evaluate the derivative of the solution of a convex cone program with respect to its parameters. We illustrate our method by applying it to examples in optimal control and economics.



Acknowledgements

S. Boyd is an Engineering Subject Editor for the Optimization and Engineering journal. S. Barratt is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518.

Author information

Corresponding author

Correspondence to Shane Barratt.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Convex formulation

In the case that A is a constant while b and c are affine functions of \(\theta\), we can write (9) as an equivalent convex optimization problem. In the linear case (i.e., when \(\mathcal {K} = \mathbf{R}_+^n\)), we can simply drop the strong duality requirement (which always holds in this case) and express (9) as

$$\begin{aligned} \begin{array}{ll} \text{minimize} & r(\theta)\\ \text{subject to} & Ax + s = b(\theta)\\ & A^T y + c(\theta) = 0\\ & s \in \mathcal{K}, \quad y \in \mathcal{K}^*. \end{array} \end{aligned}$$
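As a concrete illustration (not code from the paper), the following CVXPY sketch solves this linear-case repair problem under the added assumptions that \(b(\theta) = b_0 + B\theta\) and \(c(\theta) = c_0 + C\theta\) for given matrices \(B\) and \(C\), and that the regularizer is \(r(\theta) = \Vert \theta \Vert_1\); all problem data below are randomly generated placeholders.

```python
# Minimal CVXPY sketch (illustrative, not from the paper) of the linear-case
# repair problem above, assuming b(theta) = b0 + B @ theta,
# c(theta) = c0 + C @ theta, and r(theta) = ||theta||_1.
import cvxpy as cp
import numpy as np

m, n, p = 5, 3, 2                       # rows of A, cols of A, size of theta
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))         # placeholder problem data
b0, B = rng.standard_normal(m), rng.standard_normal((m, p))
c0, C = rng.standard_normal(n), rng.standard_normal((n, p))

theta = cp.Variable(p)                  # repair parameters
x = cp.Variable(n)                      # primal variable
s = cp.Variable(m)                      # primal slack, s in K = R_+^m
y = cp.Variable(m)                      # dual variable, y in K* = R_+^m

constraints = [
    A @ x + s == b0 + B @ theta,        # primal feasibility
    A.T @ y + c0 + C @ theta == 0,      # dual feasibility
    s >= 0,                             # s in K
    y >= 0,                             # y in K*
]
prob = cp.Problem(cp.Minimize(cp.norm1(theta)), constraints)
prob.solve()
print("size of smallest repair (1-norm):", prob.value)
```

A solution \(\theta^\star\) gives a perturbation of b and c for which the repaired problem is solvable; in the linear case, joint primal and dual feasibility suffices, since strong duality always holds.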

For more general cones \(\mathcal {K}\) (such as the second-order cone), a sufficient condition for strong duality is that there exists a feasible point in the interior of the cone. We can write this as, for example,

$$\begin{aligned} \begin{array}{ll} \text{minimize} & r(\theta)\\ \text{subject to} & Ax + s = b(\theta)\\ & A^T y + c(\theta) = 0\\ & s \in \mathbf{int}\,\mathcal{K}, \quad y \in \mathcal{K}^{*}. \end{array} \end{aligned}$$
(14)

(We could similarly constrain \(y \in \mathbf{int}\,\mathcal {K}^{*}\) and \(s \in \mathcal {K}\).)

In general, optimizing over open constraint sets is challenging, and such problems need not have an optimal point. In practice, however (and for sufficiently well-behaved \(r\), e.g., continuous \(r\)), we can approximate the true optimal value of (6) by replacing the open set \(\mathbf{int}\,\mathcal {K}\) with a sequence of closed sets \(\mathcal {K}_\varepsilon \subseteq \mathbf{int}\,\mathcal {K}\) such that \(\mathcal {K}_\varepsilon \rightarrow \mathbf{int}\,\mathcal {K}\) as \(\varepsilon \downarrow 0\).
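For concreteness, one simple choice of such closed inner approximations (an illustration consistent with, but not prescribed by, the text above) is

$$\begin{aligned} \mathcal{K} = \mathbf{R}_+^n :\quad & \mathcal{K}_\varepsilon = \{ s \in \mathbf{R}^n \mid s_i \ge \varepsilon,\ i = 1, \ldots, n \}, \\ \mathcal{K} = \{ (u, t) \mid \Vert u \Vert_2 \le t \} :\quad & \mathcal{K}_\varepsilon = \{ (u, t) \mid \Vert u \Vert_2 \le t - \varepsilon \}. \end{aligned}$$

For \(\varepsilon > 0\), each \(\mathcal{K}_\varepsilon\) is closed, convex, and contained in \(\mathbf{int}\,\mathcal{K}\), and \(\mathcal{K}_\varepsilon\) grows to \(\mathbf{int}\,\mathcal{K}\) as \(\varepsilon \downarrow 0\), so (14) can be approximated by a sequence of ordinary convex problems with closed constraint sets.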


About this article


Cite this article

Barratt, S., Angeris, G. & Boyd, S. Automatic repair of convex optimization problems. Optim Eng 22, 247–259 (2021). https://doi.org/10.1007/s11081-020-09508-9

