
Structural Safety

Volume 93, November 2021, 102117

An augmented weighted simulation method for high-dimensional reliability analysis

https://doi.org/10.1016/j.strusafe.2021.102117

Highlights

  • A novel augmented weighted simulation method is proposed.

  • A new optimization method is proposed to compute the threshold values.

  • A space reduction strategy is developed.

  • The coefficient of variation of the augmented weighted simulation method is derived.

  • The accuracy of the proposed method is significantly improved.

Abstract

In the reliability analysis of mechanical systems, sampling methods are widely used due to their universality and practicability. However, the computation of high-dimensional problems encounters tremendous numerical difficulties, especially when the performance function is highly nonlinear. In this study, an augmented weighted simulation method (AWSM) is proposed to tackle this difficulty. The basic idea of AWSM is to introduce a series of intermediate events into the weighted simulation method (WSM), in which a new optimization method is constructed to reasonably determine each intermediate event. In this way, the failure event is divided into a sequence of conditional events, and the failure probability is accordingly converted into the product of conditional probabilities. Furthermore, a space reduction strategy is proposed to increase the probability that samples are generated in each conditional event, which greatly improves the sampling efficiency. In addition, the coefficient of variation of AWSM is derived. Two mathematical examples and four engineering examples are tested, and the results demonstrate the efficiency and accuracy of the proposed method for high-dimensional problems.

Introduction

Structural reliability refers to the capacity of a structure to meet the expected safety, serviceability and durability under the specified conditions [1], [2]. The failure probability is used to represent the safety level of the structural system [3], [4], and is defined as

$$P_f = P\left[g(\mathbf{x}) \le 0\right] = \int_{g(\mathbf{x}) \le 0} f(\mathbf{x})\,\mathrm{d}\mathbf{x}$$

where $\mathbf{x} = \left[x_1, x_2, \ldots, x_n\right]^{\mathrm{T}}$ is the vector of random variables, $f(\mathbf{x})$ is the joint probability density function (PDF) of $\mathbf{x}$, $g(\mathbf{x})$ is the performance function, and $F = \{\mathbf{x} : g(\mathbf{x}) \le 0\}$ represents the failure event located in the failure domain.

The above failure probability can be solved in three different ways: approximate reliability calculation methods [5], [6], [7], moment methods [8], [9], [10] and sampling methods [11], [12], [13]. Approximate reliability calculation methods require evaluating the gradients of the performance function at the most probable point, among which the first order reliability method and the second order reliability method [14], [15], [16] are two distinguished representatives. Both are commonly applied to mechanical systems because of their simplicity and efficiency. However, when the performance function is highly nonlinear with high-dimensional random variables, their results are often inaccurate [17], [18].
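As context for the first order reliability method mentioned above, the following is a minimal sketch (not the specific formulations of [14], [15], [16]) of the classical Hasofer–Lind–Rackwitz–Fiessler iteration in standard normal space; the performance function and its gradient are assumed to be supplied by the user in standard normal variables.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n_dim, tol=1e-8, max_iter=100):
    """Illustrative FORM sketch via the HL-RF iteration.

    g(u) and grad_g(u) are the performance function and its gradient in
    standard normal variables u; failure corresponds to g(u) <= 0.
    Returns the reliability index beta and the FORM estimate Pf = Phi(-beta).
    """
    u = np.zeros(n_dim)                                 # start the search at the origin
    for _ in range(max_iter):
        grad = grad_g(u)
        # HL-RF update: most probable point of the linearized limit state
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)                            # distance to the most probable point
    return beta, norm.cdf(-beta)

# Example: linear limit state g(u) = 3 - u1 - u2, exact beta = 3 / sqrt(2)
beta, pf = form_hlrf(lambda u: 3 - u[0] - u[1],
                     lambda u: np.array([-1.0, -1.0]), n_dim=2)
```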

Different from the approximate reliability calculation methods, the moment methods calculate the failure probability from the mathematical definition of reliability [19], [20]. The basic idea of the moment methods is to obtain the statistical moments of the structural response according to the random information of the input variables, and then the moment information is converted into the failure probability of the output response [21], [22]. Since the moment methods use the random information from a limited number of samples, the computational results are susceptible to the positions of the samples [23], [24].
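As a minimal illustration of this moment idea (a crude second-moment estimate, not the specific methods of [19], [20], [21], [22]), the first two statistical moments of the response can be converted into a failure probability under a normality assumption; the sampling helper `draw_inputs` below is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def two_moment_pf(g, draw_inputs, n_samples=10_000, rng=None):
    """Second-moment estimate Pf = Phi(-mu_g / sigma_g) (illustrative sketch).

    draw_inputs(rng, n) is an assumed helper that returns n input vectors;
    the response g(x) is assumed to be approximately normally distributed.
    """
    rng = np.random.default_rng(rng)
    gx = g(draw_inputs(rng, n_samples))     # response samples, used only for moments
    mu, sigma = gx.mean(), gx.std(ddof=1)   # first two statistical moments of g(x)
    beta = mu / sigma                       # second-moment reliability index
    return norm.cdf(-beta)                  # failure probability under normality
```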

Compared with approximate reliability calculation methods and moment methods, sampling methods are insensitive to the types of random variables and to the degree of nonlinearity of the limit state function, which provides a universal tool for reliability assessment [25], [26]. Among them, Monte Carlo simulation (MCS) is the most well-known method for approximating the failure probability, and it is often used as a reference solution to test the accuracy of other methods [27]. However, it requires generating a huge number of samples to ensure the computational accuracy, which leads to an unaffordable computational cost in practical applications. To improve the computational efficiency of MCS, an active learning reliability method combining Kriging and Monte Carlo Simulation (AK-MCS) was proposed by Echard et al. [28], in which a learning function is constructed to update the design of experiments; this improves the efficiency to a large extent. However, the Kriging model may lead to inaccurate results for high-dimensional problems, especially those involving nonlinear performance functions [29], [30].

In addition, a series of advanced methods has also been developed, such as importance sampling (IS) [31], [32], line sampling (LS) [33], [34] and subset simulation (SS) [35], [36]. IS moves the sampling center to the limit state surface; in this case, the samples have a high probability of falling into the vicinity of the limit state surface, which promotes the sampling efficiency to a certain extent [37], [38]. LS transforms a high-dimensional sampling problem into multiple one-dimensional conditional probability problems in the standard normal space, and the results indicate that the computational efficiency can be significantly enhanced compared with MCS [39], [40]. SS reduces the computational cost by converting the small failure probability into the product of large conditional probabilities, and thus it is effective for solving small failure probability problems in time-consuming engineering applications [41], [42].
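The basic IS estimator can be sketched as follows (a minimal sketch in standard normal space, assuming the sampling density is simply recentered at a design point `u_star`, e.g. obtained from FORM; this is not the specific formulations of [31], [32], [37], [38]):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def is_failure_probability(g, u_star, n_dim, n_samples=10_000, rng=None):
    """Importance sampling sketch: Pf = E_h[ I_F(u) * f(u) / h(u) ].

    The sampling density h is a unit-variance normal centered at the assumed
    design point u_star; failure corresponds to g(u) <= 0.
    """
    rng = np.random.default_rng(rng)
    u_star = np.asarray(u_star, dtype=float)
    u = rng.normal(loc=u_star, scale=1.0, size=(n_samples, n_dim))  # samples from h(u)
    f = mvn.pdf(u, mean=np.zeros(n_dim))      # original standard normal density f(u)
    h = mvn.pdf(u, mean=u_star)               # importance sampling density h(u)
    failed = g(u) <= 0                        # indicator of the failure domain
    return np.mean(failed * f / h)            # weighted average of the indicator
```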

Recently, Rashki et al. [43] proposed a weighted simulation method (WSM) by assigning a weight index to each sample, and the failure probability was defined as the ratio of the weight of the samples falling into the failure domain to the weight of the total samples. The uniformly distributed samples can sufficiently cover the random space [44], and thus more samples can be generated in the failure domain for solving high-dimensional problems. Okasha [45] extended WSM to the field of reliability-based design optimization, in which the firefly algorithm was adopted to search for the global optimum. Besides, Xu et al. [46] utilized Voronoi cells to divide the random space into several sub-spaces, and the failure probability was evaluated by a weighted summation over each sub-space. However, the construction of high-order Voronoi cells is difficult for high-dimensional problems [47].
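For reference, the weighted estimator described by Rashki et al. [43] can be sketched as follows (a minimal sketch assuming independent normal inputs and a ±k·σ sampling box; the exact weight definition and sampling region in [43] may differ in detail):

```python
import numpy as np
from scipy.stats import norm

def wsm_failure_probability(g, mean, std, n_samples=100_000, k=6, rng=None):
    """WSM sketch: uniform samples over a box, each weighted by the joint PDF.

    Pf is estimated as the total weight of the failed samples divided by the
    total weight of all samples, with weight index W_i = prod_j f_j(x_ij).
    """
    rng = np.random.default_rng(rng)
    mean = np.atleast_1d(mean).astype(float)
    std = np.atleast_1d(std).astype(float)
    x = rng.uniform(mean - k * std, mean + k * std, size=(n_samples, mean.size))
    w = np.prod(norm.pdf(x, loc=mean, scale=std), axis=1)   # weight index of each sample
    failed = g(x) <= 0                                       # samples in the failure domain
    return np.sum(w * failed) / np.sum(w)

# Example: g(x) = 3 - x1 - x2 with x1, x2 ~ N(0, 1); exact Pf = Phi(-3/sqrt(2))
pf = wsm_failure_probability(lambda x: 3 - x[:, 0] - x[:, 1],
                             mean=[0.0, 0.0], std=[1.0, 1.0])
```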

In this paper, an augmented weighted simulation method (AWSM) is proposed to improve the computational accuracy and efficiency of WSM for high-dimensional reliability analysis. Firstly, the random space is divided into several subset failure domains by introducing intermediate events into WSM, which converts the failure probability into the product of a series of conditional probabilities. Secondly, the threshold values of the intermediate events are determined by constructing a new optimization formulation, which ensures that the intermediate events are determined rationally. Thirdly, a space reduction strategy (SRS) is proposed to make full use of the samples.
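For clarity, the decomposition into conditional probabilities mentioned above can be written in the generic form shared with subset-type methods (the specific threshold values $b_i$ are what the new optimization formulation in Section 3 determines):

$$P_f = P(F_m) = P(F_1)\prod_{i=1}^{m-1} P\left(F_{i+1}\mid F_i\right),
\qquad F_i = \left\{\mathbf{x} : g(\mathbf{x}) \le b_i\right\},
\quad b_1 > b_2 > \cdots > b_m = 0.$$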

The remainder of this paper is organized as follows: Section 2 reviews the classical sampling methods, including MCS, SS and WSM. Section 3 describes the proposed AWSM in detail. Section 4 provides two mathematical examples and four engineering examples with high-dimensional input variables to demonstrate the validity of AWSM. The conclusions are drawn in Section 5.

Section snippets

Monte Carlo simulation

MCS generates a large number of samples in the random space based on the joint PDF of the random variables. The failure probability of MCS is defined as the ratio of the number of samples in the failure region F to the total number of samples, which is formulated as

$$P_f^{\mathrm{MCS}} = \frac{1}{N}\sum_{j=1}^{N} I_F\!\left(\mathbf{x}_j\right)$$

where N is the total number of samples and $I_F(\mathbf{x})$ is the indicator function with respect to the performance function: if $\mathbf{x}_j \in F$, $I_F(\mathbf{x}_j) = 1$; otherwise, $I_F(\mathbf{x}_j) = 0$. Although MCS is very simple and universal, it always needs to generate a large number of samples to ensure the computational accuracy, especially for small failure probabilities.
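A minimal sketch of this crude estimator and its well-known coefficient of variation, $\sqrt{(1-P_f)/(N P_f)}$, is given below; `sample_inputs` is a hypothetical user-supplied sampler of the joint PDF.

```python
import numpy as np

def mcs_failure_probability(g, sample_inputs, n_samples=1_000_000, rng=None):
    """Crude Monte Carlo estimate Pf = (1/N) * sum_j I_F(x_j) (illustrative sketch).

    sample_inputs(rng, n) is an assumed helper that draws n samples from the
    joint PDF f(x); failure is defined by g(x) <= 0.
    """
    rng = np.random.default_rng(rng)
    x = sample_inputs(rng, n_samples)        # samples from the joint PDF
    indicator = g(x) <= 0                    # I_F(x_j): 1 if failed, 0 otherwise
    pf = indicator.mean()                    # failure probability estimate
    cov = np.sqrt((1.0 - pf) / (pf * n_samples)) if pf > 0 else np.inf
    return pf, cov                           # estimate and its coefficient of variation
```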

The proposed method

In this section, an augmented weighted simulation method (AWSM) is proposed to further improve the computational efficiency and accuracy of WSM. It contains five parts: the definition of AWSM, the determination of the threshold values, the SRS, the computation of the coefficient of variation (COV) and the computational procedure of AWSM.

Demonstrative examples and comparative study

In this section, six numerical examples are given to test the efficiency and accuracy of AWSM. The results obtained by AWSM are verified by comparing them with those of MCS, polynomial chaos expansion (PCE), the dimension-reduction method (DRM), AK-MCS, SS, IS and WSM. The results of SS and IS are computed by UQLab [59]. The number of samples in each layer of SS is 1000, and the conditional probability is 0.1. The maximum number of iterations is 10. The error is evaluated by the relative error $\varepsilon = \left|P_f - P_f^{\mathrm{MCS}}\right| / P_f^{\mathrm{MCS}}$.

Conclusions

In this paper, an augmented weighted simulation method (AWSM) is proposed by introducing a series of reasonable intermediate events, in which a new optimization method is constructed by using the variance to obtain the threshold values reasonably and efficiently. Then, a space reduction strategy is developed to narrow each random sub-space, so that each intermediate event can reuse the samples from the previous conditional events. In this case, the small failure probability is expressed as the product of a series of larger conditional probabilities.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The support of the National Natural Science Foundation of China (Grant No. 11972143), the Fundamental Research Funds for the Central Universities of China (Grant Nos. JZ2020HGPA0112, JZ2020HGTA0080) and the State Key Laboratory of Reliability and Intelligence of Electrical Equipment (Grant No. EERI_KF2020002) is much appreciated.

References (59)

  • Z. Zhang et al.

    First and second order approximate reliability analysis methods using evidence theory

    Reliab Eng Syst Saf

    (2015)
  • X. Zhang et al.

    Maximum entropy distribution with fractional moments for reliability analysis

    Struct Saf

    (2020)
  • W. He et al.

    A novel structural reliability analysis method via improved maximum entropy method based on nonlinear mapping and sparse grid numerical integration

    Mech Syst Signal Process

    (2019)
  • D.V. Rosowsky

    Evolution of probabilistic analysis of timber structures from second-moment reliability methods to fragility analysis

    Struct Saf

    (2013)
  • Q. Pan et al.

    An efficient reliability method combining adaptive support vector machine and Monte Carlo simulation

    Struct Saf

    (2017)
  • R. Vaisman et al.

    Splitting sequential Monte Carlo for efficient unreliability estimation of highly reliable networks

    Struct Saf

    (2016)
  • H. Zhang et al.

    Interval Monte Carlo methods for structural reliability

    Struct Saf

    (2010)
  • B. Echard et al.

    AK-MCS: An active learning reliability method combining Kriging and Monte Carlo Simulation

    Struct Saf

    (2011)
  • J. Zhang et al.

    An active learning reliability method combining Kriging constructed with exploration and exploitation of failure region and subset simulation

    Reliab Eng Syst Saf

    (2019)
  • T. Zhou et al.

    Active learning and active subspace enhancement for PDEM-based high-dimensional reliability analysis

    Struct Saf

    (2021)
  • S. Geyer et al.

    Cross entropy-based importance sampling using Gaussian densities revisited

    Struct Saf

    (2019)
  • I. Papaioannou et al.

    Reliability sensitivity estimation with sequential importance sampling

    Struct Saf

    (2018)
  • I. Depina et al.

    Reliability analysis with Meta model line sampling

    Struct Saf

    (2016)
  • M. de Angelis et al.

    Advanced line sampling for efficient robust reliability analysis

    Struct Saf

    (2015)
  • F. Wang et al.

    Subset simulation for non-Gaussian dependent random variables given incomplete probability information

    Struct Saf

    (2017)
  • H. Dai et al.

    Wavelet density-based adaptive importance sampling method

    Struct Saf

    (2015)
  • Z. Lu et al.

    Reliability sensitivity method by line sampling

    Struct Saf

    (2008)
  • H.J. Pradlwarter et al.

    Application of line sampling simulation method to reliability benchmark problems

    Struct Saf

    (2007)
  • Z. Wang et al.

    Hamiltonian Monte Carlo methods for subset simulation in reliability analysis

    Struct Saf

    (2019)