
A Stochastic Primal-Dual Method for Optimization with Conditional Value at Risk Constraints

Journal of Optimization Theory and Applications

Abstract

We study a first-order primal-dual subgradient method for risk-constrained, risk-penalized optimization problems, where risk is modeled via the popular conditional value at risk (CVaR) measure. The algorithm processes independent and identically distributed samples from the underlying uncertainty in an online fashion and, within K iterations with a constant step size, produces a point that is \(\eta /\sqrt{K}\)-approximately feasible and \(\eta /\sqrt{K}\)-approximately optimal, where \(\eta \) grows with the tunable risk parameters of CVaR. We use our bounds to derive optimized step sizes and precisely characterize the computational cost of risk aversion, as revealed by the growth in \(\eta \). Our algorithm makes a simple modification to a typical primal-dual stochastic subgradient method; with this mild change, our analysis surprisingly obviates the need for the a priori bounds or complex adaptive bounding schemes on the dual variables that many prior works assume in order to execute such algorithms. We also draw interesting parallels between our sample complexity bounds and those derived in the literature for chance-constrained programs, which employ a very different solution architecture.
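To make the setup concrete, below is a minimal sketch, in Python, of a generic primal-dual stochastic subgradient loop for a CVaR-constrained toy problem, built on the Rockafellar–Uryasev representation \(\mathrm{CVaR}_{\beta }[Z] = \min _z \, z + \mathbb {E}[(Z-z)_+]/(1-\beta )\). The problem data, step size, and update order are our illustrative assumptions; the sketch omits the paper's modification to the dual update and does not reproduce its \(\eta /\sqrt{K}\) guarantees.

    import numpy as np

    # Toy instance (illustrative, not the paper's exact method):
    #   minimize  c @ x  over x in [0, 1]^n
    #   subject to  CVaR_beta[ g(x, w) ] <= 0,  g(x, w) = w @ x - 1.
    rng = np.random.default_rng(0)
    n, beta = 5, 0.9
    c = -np.ones(n)                       # maximize sum(x) <=> minimize -sum(x)

    def g(x, w):
        return w @ x - 1.0                # random loss whose CVaR is constrained

    def sample_w():
        return rng.normal(loc=0.3, scale=0.1, size=n)

    K = 20000
    alpha = 1.0 / np.sqrt(K)              # constant step size over the horizon K
    x, z, lam = np.zeros(n), 0.0, 0.0     # primal, VaR-like auxiliary, dual

    for _ in range(K):
        w = sample_w()
        # Stochastic subgradient of the Lagrangian in x:
        #   c + (lam / (1 - beta)) * 1{g(x, w) > z} * grad_x g(x, w).
        gx = c + (lam / (1.0 - beta)) * float(g(x, w) > z) * w
        x = np.clip(x - alpha * gx, 0.0, 1.0)
        w2 = sample_w()                   # fresh sample for the z-update (cf. footnote 2)
        z -= alpha * lam * (1.0 - float(g(x, w2) > z) / (1.0 - beta))
        # Projected dual ascent on the sampled CVaR constraint value.
        cvar_sample = z + max(g(x, w2) - z, 0.0) / (1.0 - beta)
        lam = max(lam + alpha * cvar_sample, 0.0)

    # Monte Carlo check of the CVaR constraint at the returned point.
    losses = np.array([g(x, sample_w()) for _ in range(100000)])
    var = np.quantile(losses, beta)
    print("x ~", np.round(x, 2))
    print("estimated CVaR_0.9:", round(losses[losses >= var].mean(), 3))

On a Gaussian toy instance like this one, the dual variable settles near the multiplier that balances cost reduction against the CVaR constraint; the point of the paper's analysis is to make such behavior precise without imposing any a priori bound on that variable.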


Notes

  1. For the unconstrained problem, variance-reduced stochastic gradient descent methods can efficiently minimize the resulting finite sum as in [19, 38].

  2. This step requires that we sample \(\omega \) once more for the z-update.

  3. The integrality of K is ignored for notational convenience.

  4. Lemma 2.1 provides sufficient conditions for the existence of such a saddle point.

  5. The \(\mathrm{CVaR}\) of any random variable can only vary between its mean and the maximum value that the random variable can take.
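The bounds in footnote 5 are easy to check numerically. The following is a small sketch, under our own illustrative choices of distribution and estimator, using the standard empirical tail-average estimate of \(\mathrm{CVaR}_{\beta }\): the estimate equals the sample mean at \(\beta = 0\) and approaches the sample maximum as \(\beta \rightarrow 1\).

    import numpy as np

    # Check footnote 5: mean <= CVaR_beta[Z] <= max Z for any random variable Z.
    rng = np.random.default_rng(1)
    samples = rng.exponential(scale=1.0, size=100000)

    def cvar(s, beta):
        # Empirical CVaR: average of the worst (1 - beta) fraction of outcomes.
        return s[s >= np.quantile(s, beta)].mean()

    for beta in (0.0, 0.5, 0.9, 0.99):
        print(f"beta = {beta}: CVaR = {cvar(samples, beta):.3f}")
    print(f"mean = {samples.mean():.3f}, max = {samples.max():.3f}")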

References

  1. Ahmadi-Javid, A.: Entropic value-at-risk: a new coherent risk measure. J. Optim. Theory Appl. 155(3), 1105–1123 (2012)

  2. Baes, M., Bürgisser, M., Nemirovski, A.: A randomized mirror-prox method for solving structured large-scale matrix saddle-point problems. SIAM J. Optim. 23(2), 934–962 (2013)

  3. Bedi, A.S., Koppel, A., Rajawat, K.: Nonparametric compositional stochastic optimization. arXiv preprint arXiv:1902.06011 (2019)

  4. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization, vol. 28. Princeton University Press, Princeton (2009)

  5. Bertsekas, D.P.: Stochastic optimization problems with nondifferentiable cost functionals. J. Optim. Theory Appl. 12(2), 218–231 (1973)

  6. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, Berlin (2013)

  7. Boob, D., Deng, Q., Lan, G.: Stochastic first-order methods for convex and nonconvex functional constrained optimization. arXiv preprint arXiv:1908.02734 (2019)

  8. Borkar, V.S., Meyn, S.P.: The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control Optim. 38(2), 447–469 (2000)

  9. Boyd, S., Mutapcic, A.: Subgradient methods. Lecture notes of EE364b, Stanford University, Winter Quarter 2007 (2006)

  10. Calafiore, G., Campi, M.C.: Uncertain convex programs: randomized solutions and confidence levels. Math. Program. 102(1), 25–46 (2005)

  11. Campi, M.C., Garatti, S.: The exact feasibility of randomized solutions of uncertain convex programs. SIAM J. Optim. 19(3), 1211–1230 (2008)

  12. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manag. Sci. 6(1), 73–79 (1959)

  13. Doan, T.T., Bose, S., Nguyen, D.H., Beck, C.L.: Convergence of the iterates in mirror descent methods. IEEE Control Syst. Lett. 3(1), 114–119 (2018)

  14. Dominguez-Garcia, A.D., Hadjicostis, C.N.: Distributed matrix scaling and application to average consensus in directed graphs. IEEE Trans. Autom. Control 58(3), 667–681 (2013)

  15. Ermoliev, Y.M.: Methods of stochastic programming (1976)

  16. Hadjiyiannis, M.J., Goulart, P.J., Kuhn, D.: An efficient method to estimate the suboptimality of affine controllers. IEEE Trans. Autom. Control 56(12), 2841–2853 (2011)

  17. Hanasusanto, G.A., Kuhn, D., Wiesemann, W.: A comment on "computational complexity of stochastic programming problems". Math. Program. 159(1–2), 557–569 (2016)

  18. Hiriart-Urruty, J.B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms I: Fundamentals, vol. 305. Springer, Berlin (2013)

  19. Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. In: Advances in Neural Information Processing Systems, pp. 315–323 (2013)

  20. Kalogerias, D.S., Powell, W.B.: Recursive optimization of convex risk measures: mean-semideviation models. arXiv preprint arXiv:1804.00636 (2018)

  21. Kiefer, J., Wolfowitz, J.: Stochastic estimation of the maximum of a regression function. Ann. Math. Stat. 23(3), 462–466 (1952)

  22. Kisiala, J.: Conditional value-at-risk: theory and applications. arXiv preprint arXiv:1511.00140 (2015)

  23. Koppel, A., Sadler, B.M., Ribeiro, A.: Proximity without consensus in online multiagent optimization. IEEE Trans. Signal Process. 65(12), 3062–3077 (2017)

  24. Kushner, H., Yin, G.G.: Stochastic Approximation and Recursive Algorithms and Applications, vol. 35. Springer, Berlin (2003)

  25. Mafusalov, A., Uryasev, S.: Buffered probability of exceedance: mathematical properties and optimization. SIAM J. Optim. 28(2), 1077–1103 (2018)

  26. Mahdavi, M., Jin, R., Yang, T.: Trading regret for efficiency: online convex optimization with long term constraints. J. Mach. Learn. Res. 13(1), 2503–2528 (2012)

  27. Miller, C.W., Yang, I.: Optimal control of conditional value-at-risk in continuous time. SIAM J. Control Optim. 55(2), 856–884 (2017)

  28. Nedić, A., Lee, S.: On stochastic subgradient mirror-descent algorithm with weighted averaging. SIAM J. Optim. 24(1), 84–107 (2014)

  29. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)

  30. Nedić, A., Ozdaglar, A.: Subgradient methods for saddle-point problems. J. Optim. Theory Appl. 142(1), 205–228 (2009)

  31. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)

  32. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer (2004)

  33. Ogryczak, W., Ruszczyński, A.: From stochastic dominance to mean-risk models: semideviations as risk measures. Eur. J. Oper. Res. 116(1), 33–50 (1999)

  34. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)

  35. Robbins, H., Siegmund, D.: A convergence theorem for non-negative almost supermartingales and some applications. In: Optimizing Methods in Statistics, pp. 233–257. Elsevier (1971)

  36. Rockafellar, R.T., Uryasev, S.: Conditional value-at-risk for general loss distributions. J. Bank. Finance 26(7), 1443–1471 (2002)

  37. Ruszczyński, A., Shapiro, A.: Optimization of convex risk functions. Math. Oper. Res. 31(3), 433–452 (2006)

  38. Schmidt, M., Le Roux, N., Bach, F.: Minimizing finite sums with the stochastic average gradient. Math. Program. 162(1–2), 83–112 (2017)

  39. Shapiro, A., Philpott, A.: A tutorial on stochastic programming. Manuscript. Available at www2.isye.gatech.edu/ashapiro/publications.html (2007)

  40. Skaf, J., Boyd, S.P.: Design of affine controllers via convex optimization. IEEE Trans. Autom. Control 55(11), 2476–2487 (2010)

  41. Sun, T., Sun, Y., Yin, W.: On Markov chain gradient descent. In: Advances in Neural Information Processing Systems, pp. 9896–9905 (2018)

  42. Xu, Y.: Primal-dual stochastic gradient method for convex programs with many functional constraints. arXiv preprint arXiv:1802.02724v1 (2018)

  43. Yamashita, S., Hatanaka, T., Yamauchi, J., Fujita, M.: Passivity-based generalization of primal-dual dynamics for non-strictly convex cost functions. Automatica 112, 108712 (2020)

  44. Yu, H., Neely, M., Wei, X.: Online convex optimization with stochastic constraints. In: Advances in Neural Information Processing Systems, pp. 1428–1438 (2017)

  45. Zhang, T., Uryasev, S., Guan, Y.: Derivatives and subderivatives of buffered probability of exceedance. Oper. Res. Lett. 47(2), 130–132 (2019)

  46. Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent. In: Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 928–936 (2003)


Acknowledgements

We thank Eilyan Bitar, Rayadurgam Srikant, Tamer Başar, and Stan Uryasev for helpful discussions. This work was partially supported by the National Science Foundation under grant no. CAREER-2048065, the International Institute of Carbon-Neutral Energy Research (\(\hbox {I}^2\)CNER), and the Power System Engineering Research Center (PSERC).

Author information

Correspondence to Avinash N. Madavan.

Additional information

Communicated by Xiaolu Tan.



Cite this article

Madavan, A.N., Bose, S. A Stochastic Primal-Dual Method for Optimization with Conditional Value at Risk Constraints. J Optim Theory Appl 190, 428–460 (2021). https://doi.org/10.1007/s10957-021-01888-x

