A novel deep learning prediction model for concrete dam displacements using interpretable mixed attention mechanism
Advanced Engineering Informatics ( IF 8.0 ) Pub Date : 2021-09-07 , DOI: 10.1016/j.aei.2021.101407
Qiubing Ren 1 , Mingchao Li 1 , Heng Li 2 , Yang Shen 3
Dam displacements can effectively reflect a dam's operational status, and thus establishing a reliable displacement prediction model is important for dam health monitoring. The majority of existing data-driven models, however, focus on static regression relationships, which cannot capture long-term temporal dependencies or adaptively select the most relevant influencing factors for prediction. Moreover, emerging modeling tools such as machine learning (ML) and deep learning (DL) are mostly black-box models, which makes their physical interpretation challenging and greatly limits their practical engineering applications. To address these issues, this paper proposes an interpretable mixed attention mechanism long short-term memory (MAM-LSTM) model based on an encoder-decoder architecture, which is formulated in two stages. In the encoder stage, a factor attention mechanism is developed to adaptively select the highly influential factors at each time step by referring to the previous hidden state. In the decoder stage, a temporal attention mechanism is introduced to properly extract the key time segments by identifying the relevant hidden states across all time steps. For interpretation purposes, our emphasis is placed on the quantification and visualization of the factor and temporal attention weights. Finally, the effectiveness of the proposed model is verified using monitoring data collected from a real-world dam, where its accuracy is compared against a classical statistical model, conventional ML models, and homogeneous DL models. The comparison demonstrates that the MAM-LSTM model outperforms the other models in most cases. Furthermore, the interpretation of the global attention weights confirms the physical rationality of our attention-based model. This work addresses the research gap in interpretable artificial intelligence for dam displacement prediction and delivers a model with both high accuracy and interpretability.
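The two attention stages described above can be sketched in a few lines. The following is an illustrative pure-Python reconstruction, not the authors' implementation: the additive scoring form, dot-product temporal scoring, dimensions, and all variable names (`W`, `v`, `h_prev`, `d_prev`) are assumptions made for the example. It only shows how factor weights are computed from the previous hidden state in the encoder, and how encoder hidden states are re-weighted into a context vector in the decoder.

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def factor_attention(x_t, h_prev, W, v):
    """Encoder stage (sketch): weight each influencing factor at time t
    by its relevance to the previous hidden state, via additive scoring.

    x_t    : n factor values at the current time step (e.g. water level,
             temperature, ageing effect) -- hypothetical inputs
    h_prev : previous encoder hidden state (m floats)
    W      : one weight row per factor over [h_prev; x_k] (n rows, m+1 cols)
    v      : scalar projection (kept scalar for simplicity)
    """
    scores = []
    for k, x_k in enumerate(x_t):
        concat = h_prev + [x_k]
        s = sum(w * c for w, c in zip(W[k], concat))
        scores.append(v * math.tanh(s))
    alpha = softmax(scores)                        # factor attention weights
    x_tilde = [a * x for a, x in zip(alpha, x_t)]  # re-weighted input fed to the LSTM
    return alpha, x_tilde

def temporal_attention(d_prev, encoder_states):
    """Decoder stage (sketch): weight all encoder hidden states by their
    relevance to the decoder state, producing a context vector."""
    scores = [sum(d * h for d, h in zip(d_prev, h_i)) for h_i in encoder_states]
    beta = softmax(scores)                         # temporal attention weights
    dim = len(encoder_states[0])
    context = [sum(b * h_i[j] for b, h_i in zip(beta, encoder_states))
               for j in range(dim)]
    return beta, context

# Toy example: 3 influencing factors, hidden size 2 (all values hypothetical)
x_t = [0.8, 0.1, 0.5]
h_prev = [0.2, -0.3]
W = [[0.5, 0.1, 0.4], [0.2, 0.3, -0.1], [-0.3, 0.6, 0.2]]
alpha, x_tilde = factor_attention(x_t, h_prev, W, v=1.0)

encoder_states = [[0.1, 0.4], [0.3, -0.2], [0.0, 0.5]]
beta, context = temporal_attention([0.2, -0.3], encoder_states)
```

Because both `alpha` and `beta` are softmax outputs, they are nonnegative and sum to one, which is what makes them directly quantifiable and visualizable as factor and temporal importance — the basis of the interpretability claim.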




Updated: 2021-09-08