Local Latin hypercube refinement for multi-objective design uncertainty optimization
Applied Soft Computing (IF 8.7), Pub Date: 2021-08-16, DOI: 10.1016/j.asoc.2021.107807
Can Bogoclu 1,2, Dirk Roos 2, Tamara Nestorović 1

Optimizing the reliability and the robustness of a design is important but often unaffordable due to high sample requirements. Surrogate models based on statistical and machine learning methods are used to increase the sample efficiency. However, for higher-dimensional or multi-modal systems, surrogate models may also require a large number of samples to achieve good results. We propose a sequential sampling strategy for the surrogate-based solution of multi-objective reliability-based robust design optimization problems. The proposed local Latin hypercube refinement (LoLHR) strategy is model-agnostic and can be combined with any surrogate model, because there is no free lunch but possibly a budget one. The proposed method is compared to stationary sampling as well as other strategies proposed in the literature. Gaussian process and support vector regression are both used as surrogate models. Empirical evidence is presented, showing that LoLHR achieves on average better results than other surrogate-based strategies on the tested examples.
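The abstract does not include implementation details, but the core idea of sequential local refinement can be illustrated with a minimal sketch: draw an initial Latin hypercube design, fit a surrogate, and add further Latin hypercube samples inside a shrinking box around the region the surrogate currently considers most promising. The sketch below uses SciPy's LatinHypercube sampler and a scikit-learn Gaussian process; the objective function, box-shrink schedule, and sample budgets are illustrative assumptions and not the authors' LoLHR algorithm, which targets multi-objective reliability-based robust design optimization rather than a single deterministic objective.

```python
# Hypothetical sketch of local Latin hypercube refinement for a single
# deterministic objective; the test function and all budgets are assumptions.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def objective(x):
    # Placeholder multi-modal test function (illustrative only).
    return np.sin(3.0 * x[:, 0]) + (x[:, 1] - 0.5) ** 2


lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])
sampler = qmc.LatinHypercube(d=2, seed=0)

# Initial (stationary) Latin hypercube design over the full design space.
X = qmc.scale(sampler.random(n=20), lower, upper)
y = objective(X)

for it in range(5):
    # Fit the surrogate on all samples collected so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Locate a promising region from the surrogate's prediction on a cheap
    # candidate set, then centre a shrinking box on that point.
    candidates = qmc.scale(sampler.random(n=256), lower, upper)
    centre = candidates[np.argmin(gp.predict(candidates))]
    half_width = 0.5 * (upper - lower) * 0.5 ** (it + 1)
    lo = np.maximum(lower, centre - half_width)
    hi = np.minimum(upper, centre + half_width)

    # Refine with a small local Latin hypercube and evaluate the true model.
    X_new = qmc.scale(sampler.random(n=8), lo, hi)
    X, y = np.vstack([X, X_new]), np.concatenate([y, objective(X_new)])

print("best design:", X[np.argmin(y)], "objective:", y.min())
```

Because the refinement loop only consumes surrogate predictions, the Gaussian process could be swapped for a support vector regressor without changing the loop, which reflects the model-agnostic claim made in the abstract.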



Updated: 2021-08-27