Topology optimization under uncertainty using a stochastic gradient-based approach
Structural and Multidisciplinary Optimization (IF 3.6) Pub Date: 2020-09-26, DOI: 10.1007/s00158-020-02599-z
Subhayan De, Jerrad Hampton, Kurt Maute, Alireza Doostan

Topology optimization under uncertainty (TOuU) often defines objectives and constraints by statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, a large number of adjoint (and/or forward) solves may be required to ensure the accuracy of these gradients. The optimization procedure itself often requires a large number of iterations, which may render TOuU computationally expensive, if not infeasible. To tackle this difficulty, we here propose an optimization approach that generates a stochastic approximation of the objective, constraints, and their gradients via a small number of adjoint (and/or forward) solves, per optimization iteration. A statistically independent (stochastic) approximation of these quantities is generated at each optimization iteration. The total cost of this approach is only a small factor larger than that of the corresponding deterministic topology optimization problem. We incorporate the stochastic approximation of objective, constraints, and their design sensitivities into two classes of optimization algorithms. First, we investigate the stochastic gradient descent (SGD) method and a number of its variants, which have been successfully applied to large-scale optimization problems for machine learning. Second, we study the use of the proposed stochastic approximation approach within conventional nonlinear programming methods, focusing on the globally convergent method of moving asymptotes (GCMMA). The performance of these algorithms is investigated with structural design optimization problems utilizing a solid isotropic material with penalization (SIMP), as well as an explicit level set method. 
These investigations, conducted on both two- and three-dimensional structures, illustrate the efficacy of the proposed stochastic gradient approach for TOuU applications.
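The core idea — replacing exact moment estimates with a cheap stochastic approximation recomputed from a few independent samples at each optimization iteration — can be illustrated with a minimal sketch. This is not the paper's implementation (which uses adjoint solves of a structural model); here the "forward solve" is simply evaluating a toy objective E[(x − ξ)²] for random ξ, and each iteration draws only a small batch of samples, analogous to a small number of forward/adjoint solves per iteration. The names `stochastic_gradient` and `sgd` are illustrative, not from the paper.

```python
import random

def stochastic_gradient(x, samples):
    # Unbiased estimate of d/dx E[(x - xi)^2] from a small sample batch;
    # each sample plays the role of one cheap forward/adjoint solve.
    return sum(2.0 * (x - xi) for xi in samples) / len(samples)

def sgd(mu=3.0, sigma=0.5, step=0.05, iters=2000, batch=4, seed=0):
    # Plain stochastic gradient descent on the toy objective E[(x - xi)^2]
    # with xi ~ N(mu, sigma^2); the exact minimizer is x = mu.
    # A fresh, statistically independent batch is drawn every iteration,
    # mirroring the paper's per-iteration stochastic approximation.
    rng = random.Random(seed)
    x = 0.0
    for _ in range(iters):
        batch_samples = [rng.gauss(mu, sigma) for _ in range(batch)]
        x -= step * stochastic_gradient(x, batch_samples)
    return x
```

With only 4 samples per iteration the iterate still converges to a small neighborhood of the true minimizer x = 3.0, because the sampling noise averages out across iterations — the same mechanism that lets the proposed TOuU approach keep the per-iteration cost close to that of a deterministic topology optimization step.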


