Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version)
arXiv - CS - Machine Learning Pub Date : 2020-09-21 , DOI: arxiv-2009.10149
Gautam Raj Mode, Khaza Anuarul Hoque

In manufacturing, unexpected failures are considered a primary operational risk, as they can hinder productivity and incur huge losses. State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices to ascertain the health status of equipment, and thus reduce downtime and maintenance cost while increasing productivity. Unfortunately, both IoT sensors and DL algorithms are vulnerable to cyber attacks, and hence pose a significant threat to PHM systems. In this paper, we adopt adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain. Specifically, we craft adversarial examples using the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), and apply them to Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Convolutional Neural Network (CNN) based PHM models. We evaluate the impact of the adversarial attacks using NASA's turbofan engine dataset. The results show that all of the evaluated PHM models are vulnerable to adversarial attacks, which can cause serious errors in remaining useful life (RUL) estimation. The results also show that the crafted adversarial examples are highly transferable and may cause significant damage to PHM systems.
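To illustrate the two attacks named in the abstract, the following is a minimal sketch of FGSM and BIM in NumPy. A hypothetical linear RUL regressor stands in for the paper's LSTM/GRU/CNN models (for which the gradient would come from automatic differentiation); the weights, epsilon, and step sizes are illustrative assumptions, not values from the paper.

```python
# Sketch of FGSM and BIM adversarial crafting against a toy RUL model.
# The "model" is a hypothetical linear regressor y_hat = w @ x, chosen
# so the input gradient has a closed form; real PHM models would supply
# this gradient via backpropagation.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=8)       # model weights (stand-in for a trained network)
x = rng.normal(size=8)       # one sensor snapshot from the engine
y = w @ x - 0.3              # true RUL label (model slightly over-predicts)

def grad_loss(x_in, y_true):
    """Gradient of the squared error (w @ x - y)^2 w.r.t. the input."""
    return 2.0 * (w @ x_in - y_true) * w

def fgsm(x_in, y_true, eps):
    """Fast Gradient Sign Method: one step of size eps along the sign
    of the input gradient, maximizing the model's loss."""
    return x_in + eps * np.sign(grad_loss(x_in, y_true))

def bim(x_in, y_true, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps of size alpha,
    clipped after each step to stay within an eps-ball of the input."""
    x_adv = x_in.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y_true))
        x_adv = np.clip(x_adv, x_in - eps, x_in + eps)
    return x_adv

clean_err = abs(w @ x - y)
x_fgsm = fgsm(x, y, eps=0.1)
x_bim = bim(x, y, eps=0.1, alpha=0.02, steps=10)
err_fgsm = abs(w @ x_fgsm - y)
err_bim = abs(w @ x_bim - y)
print(f"clean error: {clean_err:.3f}, "
      f"FGSM error: {err_fgsm:.3f}, BIM error: {err_bim:.3f}")
```

Even with a perturbation bounded by eps = 0.1 per sensor reading, both attacks push the RUL estimate further from the truth than the clean input, which is the failure mode the paper studies at scale on the turbofan dataset.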

Updated: 2020-09-29