An explainable prediction framework for engineering problems: case studies in reinforced concrete members modeling
Engineering Computations (IF 1.6) Pub Date: 2021-07-07, DOI: 10.1108/ec-02-2021-0096
Amirhessam Tahmassebi, Mehrtash Motamedi, Amir H. Alavi, Amir H. Gandomi

Purpose

Engineering design and operational decisions depend largely on a deep understanding of the application, which in turn requires simplifying assumptions to make the problems tractable. Cutting-edge machine learning algorithms are among the emerging tools that can simplify this process. In this paper, we propose a novel, scalable and interpretable machine learning framework to automate the process and fill the current gap.

Design/methodology/approach

The essential principles of the proposed pipeline are (1) scalability, (2) interpretability and (3) robust probabilistic performance across engineering problems. The lack of interpretability of complex machine learning models prevents their use in many settings, including engineering computation assessments: many consumers of machine learning models will not trust results produced by a method they cannot understand. Thus, the SHapley Additive exPlanations (SHAP) approach is employed to interpret the developed machine learning models.
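The attribution idea underlying SHAP can be illustrated with a minimal, self-contained sketch (not the paper's pipeline, and a hypothetical toy model): each feature's Shapley value is its marginal contribution to the prediction, averaged over all orders in which features can be revealed against a baseline input.

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for a single prediction.

    model    -- callable taking a list of feature values
    x        -- the instance being explained
    baseline -- reference feature values ("feature absent")
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)        # start from the reference input
        prev = model(z)
        for i in order:
            z[i] = x[i]           # reveal feature i
            cur = model(z)
            phi[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Toy linear model: attributions reduce to w_i * (x_i - b_i).
w = [2.0, -1.0, 0.5]
model = lambda z: sum(wi * zi for wi, zi in zip(w, z))
print(shapley_values(model, [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]))
# → [2.0, -3.0, 1.0]
```

This exact enumeration scales factorially in the number of features; practical SHAP implementations use model-specific shortcuts or sampling approximations instead.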

Findings

The proposed framework can be applied to a variety of engineering problems, including seismic damage assessment of structures. Its performance is investigated using two case studies of failure identification in reinforced concrete (RC) columns and shear walls. In addition, the reproducibility, reliability and generalizability of the results were validated, and the results of the framework were compared to benchmark studies. The proposed framework outperformed the benchmark results with high statistical significance.

Originality/value

Although the current study reveals that the geometric input features and reinforcement indices are the most important variables in failure mode detection, a better model could be achieved by employing more robust strategies to establish a proper database and reduce the errors in identifying some of the failure modes.




Updated: 2021-07-07