
Learning chordal extensions

Journal of Global Optimization 81, 3–22 (2021)

Abstract

A highly influential ingredient of many techniques designed to exploit sparsity in numerical optimization is the so-called chordal extension of a graph representation of the optimization problem. The relation between the choice of chordal extension and the performance of the optimization algorithm that uses it is not yet mathematically understood. For this reason, we follow the current research trend of approaching combinatorial optimization tasks through a machine learning lens, and we devise a framework for learning elimination rules that yield high-quality chordal extensions. As a first building block of the learning framework, we propose an imitation learning scheme that mimics the elimination ordering provided by an expert rule. Results show that our imitation learning approach is effective at learning two classical elimination rules, the minimum degree and minimum fill-in heuristics, using simple graph neural network models with only a handful of parameters. Moreover, the learned policies display remarkable generalization performance, both across graphs of larger size and across graphs from a different distribution.
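To make the abstract's central objects concrete, the sketch below (our own illustration, not the authors' code; the function names and the use of the networkx library are assumptions) plays the classical elimination game under the two expert rules named above. At each step a vertex is chosen by the rule, its remaining neighbors are connected into a clique, and the vertex is removed; the original edges together with the accumulated fill-in edges form a chordal extension of the input graph.

```python
import networkx as nx

def fill_in(G, v):
    """Number of edges needed to turn the neighborhood of v into a clique."""
    nbrs = list(G.neighbors(v))
    return sum(
        1
        for i in range(len(nbrs))
        for j in range(i + 1, len(nbrs))
        if not G.has_edge(nbrs[i], nbrs[j])
    )

def chordal_extension(G, rule="min_degree"):
    """Play the elimination game under the given rule; return the resulting
    chordal extension together with the elimination ordering used."""
    H = G.copy()                  # working copy; vertices get eliminated
    added = []                    # fill-in edges accumulated along the way
    order = []                    # elimination ordering
    while H.number_of_nodes() > 0:
        if rule == "min_degree":  # eliminate a vertex of minimum degree
            v = min(H.nodes, key=H.degree)
        else:                     # "min_fill": minimize new fill-in edges
            v = min(H.nodes, key=lambda u: fill_in(H, u))
        order.append(v)
        nbrs = list(H.neighbors(v))
        # Make the neighborhood of v a clique, recording the fill-in edges.
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                if not H.has_edge(nbrs[i], nbrs[j]):
                    H.add_edge(nbrs[i], nbrs[j])
                    added.append((nbrs[i], nbrs[j]))
        H.remove_node(v)
    ext = G.copy()
    ext.add_edges_from(added)
    return ext, order

# A 4-cycle is the smallest non-chordal graph; one fill-in edge suffices.
C4 = nx.cycle_graph(4)
ext, order = chordal_extension(C4, rule="min_fill")
assert nx.is_chordal(ext) and ext.number_of_edges() == 5
```

In the learning framework described in the abstract, the hand-coded `rule` above is the component being learned: a graph neural network scores the vertices of the current graph, and the imitation learning step trains it to reproduce the expert's choice of `v` at each elimination step.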


Notes

  1. The SuiteSparse matrix collection was formerly known as the University of Florida sparse matrix collection.

  2. https://github.com/ds4dm/GraphRL.


Acknowledgements

We are indebted to three anonymous referees for their careful reading and constructive suggestions, which helped us improve the quality and readability of the paper.

Author information

Correspondence to Andrea Lodi.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Liu, D., Lodi, A. & Tanneau, M. Learning chordal extensions. J Glob Optim 81, 3–22 (2021). https://doi.org/10.1007/s10898-020-00973-1
