
An adaptive direct multisearch method for black-box multi-objective optimization

  • Research Article
  • Published in: Optimization and Engineering

Abstract

At present, black-box and simulation-based optimization problems with multiple objective functions are becoming increasingly common in the engineering context. In many cases, the functional relationships that define the objectives and constraints are only known as black boxes, cannot be differentiated accurately, and may be subject to unexpected failures. Directional direct search techniques, in particular the direct multisearch (DMS) methodology, may be applied to identify Pareto fronts for such problems. In this work, we propose a mechanism for adaptively selecting search directions in the DMS framework, with the goal of reducing the number of black-box evaluations required during the optimization. Our method relies on the concept of simplex derivatives to define search directions that are optimal for a local, linear model of the objective functions. We provide a detailed description of the resulting algorithm and offer several practical recommendations for efficiently solving the associated subproblems. The overall performance is first assessed on a standard academic benchmark. The performance of the novel method in a multidisciplinary engineering setting is then tested through a realistic case study involving the bi-objective design optimization of a mechatronic quarter-car suspension. The results show that our method is competitive with standard implementations of DMS and other state-of-the-art multi-objective direct search methods.
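To make the role of simplex derivatives more concrete, the following Python snippet sketches the general idea of estimating simplex gradients of two objectives from a few nearby samples and ranking a fixed set of poll directions by the change predicted by the resulting local linear models. This is only an illustrative sketch: the helper names, the least-squares construction, and the worst-case ranking rule are assumptions made here, not the algorithm proposed in the paper.

```python
# Illustrative sketch only: estimating simplex gradients of two objectives from
# nearby sample points and ranking candidate poll directions by the change
# predicted by the local linear models.  The helper names and the worst-case
# ranking rule below are assumptions for illustration, not the paper's algorithm.
import numpy as np

def simplex_gradient(x0, samples, fvals, f0):
    """Least-squares simplex gradient g solving S g ~= f(samples) - f(x0),
    where the rows of S are the displacements samples - x0."""
    S = np.asarray(samples) - x0
    df = np.asarray(fvals) - f0
    g, *_ = np.linalg.lstsq(S, df, rcond=None)
    return g

def rank_poll_directions(directions, gradients):
    """Order directions by the worst (largest) predicted change over all
    objectives, so directions promising simultaneous decrease come first."""
    scores = [max(float(g @ d) for g in gradients) for d in directions]
    return [d for _, d in sorted(zip(scores, directions), key=lambda t: t[0])]

# Toy bi-objective example in two variables.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2
x0 = np.array([0.2, 0.1])
pts = [x0 + np.array([1e-2, 0.0]), x0 + np.array([0.0, 1e-2])]
g1 = simplex_gradient(x0, pts, [f1(p) for p in pts], f1(x0))
g2 = simplex_gradient(x0, pts, [f2(p) for p in pts], f2(x0))
coord_dirs = [np.array(v, float) for v in ([1, 0], [-1, 0], [0, 1], [0, -1])]
print(rank_poll_directions(coord_dirs, [g1, g2]))
```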


Notes

  1. Strictly speaking, this is a deviation from the default settings for these algorithms.


Acknowledgements

This work was supported by the Research Fund KU Leuven and by Flanders Make, the strategic research center for the manufacturing industry.

Author information

Corresponding author

Correspondence to Sander Dedoncker.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Technical results used in the proof of Theorem 2


The first step is to recognize that the distance function has the following subadditivity property.

Lemma 1

(Subadditivity of the distance function) Let \({\mathcal {W}}_1,{\mathcal {W}}_2 \subset {\mathbb {R}}^n\) be closed, and let \(\mathbf {w} \in {\mathbb {R}}^n\) be an arbitrary vector. Then

$$\begin{aligned} {\text {dist}}\left( {\mathcal {W}}_1,{\mathcal {W}}_2\right) \le {\text {dist}}\left( \mathbf {w},{\mathcal {W}}_1\right) + {\text {dist}}\left( \mathbf {w},{\mathcal {W}}_2\right) . \end{aligned}$$
(40)

Proof

For arbitrary \(\mathbf {w}_1 \in {\mathcal {W}}_1, \mathbf {w}_2 \in {\mathcal {W}}_2\), and \(\mathbf {w} \in {\mathbb {R}}^n\) we have

$$\begin{aligned} {\text {dist}}\left( {\mathcal {W}}_1,{\mathcal {W}}_2\right)&\le ||\mathbf {w}_1-\mathbf {w}_2||\\&\le ||\mathbf {w}_1-\mathbf {w}|| + ||\mathbf {w}_2-\mathbf {w}|| \end{aligned}$$

by the triangle inequality. Since this holds for any \(\mathbf {w}_1\) and \(\mathbf {w}_2\), it must also be the case that

$$\begin{aligned} {\text {dist}}\left( {\mathcal {W}}_1,{\mathcal {W}}_2\right)&\le \min _{\mathbf {w}_1 \in {\mathcal {W}}_1} ||\mathbf {w}_1-\mathbf {w}|| + \min _{\mathbf {w}_2 \in {\mathcal {W}}_2}||\mathbf {w}_2-\mathbf {w}||\\&= {\text {dist}}\left( \mathbf {w},{\mathcal {W}}_1\right) + {\text {dist}}\left( \mathbf {w},{\mathcal {W}}_2\right) . \end{aligned}$$

\(\square\)
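As a quick plausibility check of Lemma 1, inequality (40) can be verified numerically for sets whose distance functions have closed forms. The Python snippet below uses two closed Euclidean balls and randomly sampled points \(\mathbf {w}\); the particular sets and sample sizes are arbitrary illustrative choices.

```python
# Numerical sanity check of Lemma 1 for two closed Euclidean balls, whose
# distance functions have simple closed forms; the sets and the sampled points
# w are arbitrary choices for illustration.
import numpy as np

def dist_point_ball(w, c, r):
    """Distance from a point w to the closed ball of centre c and radius r."""
    return max(np.linalg.norm(w - c) - r, 0.0)

def dist_ball_ball(c1, r1, c2, r2):
    """Distance between two closed balls."""
    return max(np.linalg.norm(c1 - c2) - r1 - r2, 0.0)

rng = np.random.default_rng(0)
c1, c2 = rng.normal(size=3), rng.normal(size=3)
r1, r2 = 0.5, 0.8
for _ in range(1000):
    w = rng.normal(scale=3.0, size=3)
    lhs = dist_ball_ball(c1, r1, c2, r2)
    rhs = dist_point_ball(w, c1, r1) + dist_point_ball(w, c2, r2)
    assert lhs <= rhs + 1e-12   # subadditivity (40) holds at every sampled w
```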

The next lemma shows that the minimum-norm element of a closed convex set defines a supporting hyperplane for that set.

Lemma 2

(Supporting hyperplane for convex sets) Let \({\mathcal {W}} \subset {\mathbb {R}}^n\) be nonempty, closed, and convex. Let \(\mathbf {w}^*\) be the minimizer

$$\begin{aligned} \mathbf {w}^*= \arg \min _{\mathbf {w}\in {\mathcal {W}}}||\mathbf {w}||. \end{aligned}$$

Then, for arbitrary \(\mathbf {w}\in {\mathcal {W}}\), the vector \(\mathbf {w}^*\) defines a supporting hyperplane:

$$\begin{aligned} \mathbf {w}^T\mathbf {w}^*\ge \mathbf {w}^{*T}{\mathbf {w}}^*. \end{aligned}$$

Proof

Since \({\mathcal {W}}\) is nonempty, closed, and convex, and the squared norm is coercive and strictly convex, the minimizer \(\mathbf {w}^*\) exists and is unique. By convexity, for any \(\mathbf {w}\in {\mathcal {W}}\) and \(\alpha \in [0,1]\) we have

$$\begin{aligned} \mathbf {w}^*+ \alpha (\mathbf {w}-\mathbf {w}^*) \in {\mathcal {W}}. \end{aligned}$$

The definition of \(\mathbf {w}^*\) implies

$$\begin{aligned} ||\mathbf {w}^*||^2&\le ||\mathbf {w}^*+ \alpha (\mathbf {w}-\mathbf {w}^*)||^2 \\&= ||\mathbf {w}^*||^2 + 2\alpha \mathbf {w}^{*T}(\mathbf {w}-\mathbf {w}^*) + \alpha ^2||\mathbf {w}-\mathbf {w}^*||^2. \end{aligned}$$

Subtracting \(||\mathbf {w}^*||^2\) from both sides and dividing by \(\alpha\) gives

$$\begin{aligned} 0 \le 2\mathbf {w}^{*T}(\mathbf {w}-\mathbf {w}^*) + \alpha ||\mathbf {w}-\mathbf {w}^*||^2 \end{aligned}$$

for any \(\alpha \in (0,1]\). Letting \(\alpha \rightarrow 0^+\), we find

$$\begin{aligned} \mathbf {w}^T\mathbf {w}^*\ge \mathbf {w}^{*T}\mathbf {w}^*. \end{aligned}$$

\(\square\)
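The supporting-hyperplane inequality of Lemma 2 is easy to verify numerically for a convex set whose minimum-norm element is available in closed form. The sketch below uses an axis-aligned box, for which the minimizer is the projection of the origin onto the box, obtained by coordinate-wise clipping; the particular box is an arbitrary illustrative choice.

```python
# Numerical illustration of Lemma 2 for a box W = [1, 2] x [-3, -1] x [0.5, 4],
# whose minimum-norm element is the projection of the origin onto the box;
# the particular box is an arbitrary choice for illustration.
import numpy as np

lo = np.array([1.0, -3.0, 0.5])
hi = np.array([2.0, -1.0, 4.0])

w_star = np.clip(np.zeros(3), lo, hi)   # arg min_{w in W} ||w||, by projection
rng = np.random.default_rng(1)
for _ in range(1000):
    w = rng.uniform(lo, hi)             # an arbitrary element of W
    assert w @ w_star >= w_star @ w_star - 1e-12  # supporting-hyperplane inequality
```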

Intuitively, one might expect the optimal separation of two convex sets to occur when they are diametrically opposed across the origin. In that situation, equality indeed holds in Eq. (40) with \(\mathbf {w} = \mathbf {0}\), as the next lemma demonstrates.

Lemma 3

(Distance between diametrically opposed convex sets) Let \({\mathcal {W}}_1,{\mathcal {W}}_2 \subset {\mathbb {R}}^n\) be closed and convex. Let \(\mathbf {w}_1^*\) and \(\mathbf {w}_2^*\) be the respective minimizers

$$\begin{aligned} \mathbf {w}_1^*= \arg \min _{\mathbf {w}_1\in {\mathcal {W}}_1}||\mathbf {w}_1|| \quad \text {and} \quad \mathbf {w}_2^*= \arg \min _{\mathbf {w}_2\in {\mathcal {W}}_2}||\mathbf {w}_2||. \end{aligned}$$

Suppose \(\mathbf {w}_1^*,\mathbf {w}_2^*\ne \mathbf {0}\) and that \(\mathbf {w}_1^*= -\alpha \mathbf {w}_2^*\) with \(\alpha > 0\). Then

$$\begin{aligned} {\text {dist}}\left( {\mathcal {W}}_1,{\mathcal {W}}_2\right) = {\text {dist}}\left( \mathbf {0},{\mathcal {W}}_1\right) + {\text {dist}}\left( \mathbf {0},{\mathcal {W}}_2\right) . \end{aligned}$$

Proof

For arbitrary \(\mathbf {w}_1 \in {\mathcal {W}}_1, \mathbf {w}_2 \in {\mathcal {W}}_2\) we have

$$\begin{aligned} ||\mathbf {w}_1^*||\,||\mathbf {w}_1-\mathbf {w}_2||&\ge \mathbf {w}_1^{*T}(\mathbf {w}_1-\mathbf {w}_2) \qquad \qquad \text {Cauchy-Schwarz inequality} \\&= \mathbf {w}_1^{*T}\mathbf {w}_1+\alpha \mathbf {w}_2^{*T}\mathbf {w}_2 \qquad {\mathbf {w}_1^*= -\alpha \mathbf {w}_2^*} \\&\ge \mathbf {w}_1^{*T}\mathbf {w}_1^*+\alpha \mathbf {w}_2^{*T}\mathbf {w}_2^*\qquad {\text{Lemma}}\, 2, \alpha > 0 \\&= ||\mathbf {w}_1^*||^2+\alpha ||\mathbf {w}_2^*||^2. \end{aligned}$$

Since \(\mathbf {w}_1^*= -\alpha \mathbf {w}_2^*\), we have \(\alpha ||\mathbf {w}_2^*||^2 = ||\mathbf {w}_1^*||\,||\mathbf {w}_2^*||\), so dividing both sides by \(||\mathbf {w}_1^*|| > 0\) yields

$$\begin{aligned} ||\mathbf {w}_1-\mathbf {w}_2|| \ge ||\mathbf {w}_1^*||+||\mathbf {w}_2^*|| = ||\mathbf {w}_1^*- \mathbf {w}_2^*||, \end{aligned}$$

where the last equality holds because the minimizers are aligned in opposite directions. Because the inequality holds for arbitrary \(\mathbf {w}_1\) and \(\mathbf {w}_2\), and because of the definition of \(\mathbf {w}_1^*\) and \(\mathbf {w}_2^*\), we finally obtain

$$\begin{aligned} {\text {dist}}\left( {\mathcal {W}}_1,{\mathcal {W}}_2\right) = ||\mathbf {w}_1^*- \mathbf {w}_2^*|| = ||\mathbf {w}_1^*||+||\mathbf {w}_2^*|| = {\text {dist}}\left( \mathbf {0},{\mathcal {W}}_1\right) + {\text {dist}}\left( \mathbf {0},{\mathcal {W}}_2\right) . \end{aligned}$$

\(\square\)
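Finally, the situation of Lemma 3 can be illustrated numerically with two Euclidean balls whose centres lie on the same line through the origin but on opposite sides of it, so that their minimum-norm points are antiparallel. The centres and radii in the sketch below are arbitrary illustrative choices.

```python
# Numerical illustration of Lemma 3: two balls placed on opposite sides of the
# origin along the same line are "diametrically opposed", and the distance
# between them equals the sum of their distances to the origin.  The centres
# and radii below are arbitrary illustrative choices.
import numpy as np

c = np.array([2.0, -1.0, 0.5])          # direction defining the common line
r1, r2 = 0.4, 0.7
c1, c2 = 3.0 * c, -1.5 * c              # centres on opposite sides of the origin
assert r1 < np.linalg.norm(c1) and r2 < np.linalg.norm(c2)  # origin lies outside both balls

dist_0_W1 = np.linalg.norm(c1) - r1
dist_0_W2 = np.linalg.norm(c2) - r2
dist_W1_W2 = np.linalg.norm(c1 - c2) - r1 - r2

assert abs(dist_W1_W2 - (dist_0_W1 + dist_0_W2)) < 1e-12
```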


About this article


Cite this article

Dedoncker, S., Desmet, W. & Naets, F. An adaptive direct multisearch method for black-box multi-objective optimization. Optim Eng 23, 1411–1437 (2022). https://doi.org/10.1007/s11081-021-09657-5
