Adversarial attacks on machine learning cybersecurity defences in Industrial Control Systems
Journal of Information Security and Applications (IF 5.6) Pub Date: 2021-02-02, DOI: 10.1016/j.jisa.2020.102717
Eirini Anthi, Lowri Williams, Matilda Rhode, Pete Burnap, Adam Wedgbury

The proliferation and application of machine learning-based Intrusion Detection Systems (IDS) have allowed for more flexibility and efficiency in the automated detection of cyber attacks in Industrial Control Systems (ICS). However, the introduction of such IDSs has also created an additional attack vector: the learning models themselves may be subject to cyber attacks, a threat referred to as Adversarial Machine Learning (AML). Such attacks may have severe consequences in ICS environments, as adversaries could potentially bypass the IDS. This could delay attack detection, which may result in infrastructure damage, financial loss, and even loss of life. This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples with the Jacobian-based Saliency Map Attack and examining the resulting classification behaviours. The analysis also explores how such samples can be used to improve the robustness of supervised models through adversarial training. An authentic power system dataset was used to support the experiments presented herein. Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 6 and 11 percentage points, respectively, when adversarial samples were present. Their performance improved following adversarial training, demonstrating increased robustness towards such attacks.
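The abstract names two concrete techniques: crafting adversarial samples with the Jacobian-based Saliency Map Attack (JSMA) and hardening classifiers via adversarial training. The following Python sketch is purely illustrative and is not the authors' implementation: because tree-based classifiers such as Random Forest and J48 expose no gradients, a differentiable surrogate (multinomial logistic regression) is assumed for computing the saliency map, and the parameters theta, max_iter, and clip are illustrative defaults.

import numpy as np
from sklearn.linear_model import LogisticRegression

def softmax_jacobian(model, x):
    # Jacobian dP(c)/dx_j for a fitted LogisticRegression:
    # dp_c/dx_j = p_c * (W_cj - sum_k p_k * W_kj)
    W = model.coef_
    if W.shape[0] == 1:                      # binary model: logits are (0, w.x + b)
        W = np.vstack([np.zeros_like(W), W])
    p = model.predict_proba(x[None, :])[0]
    return p[:, None] * (W - p @ W)

def jsma(model, x, target, theta=0.1, max_iter=50, clip=(0.0, 1.0)):
    # Greedily perturb the single most salient feature per step until the
    # surrogate predicts the target class index (assumes normalised features).
    x_adv = x.astype(float)
    for _ in range(max_iter):
        if model.predict_proba(x_adv[None, :])[0].argmax() == target:
            break
        J = softmax_jacobian(model, x_adv)
        d_target = J[target]                 # gradient of the target class
        d_others = J.sum(axis=0) - d_target  # summed gradients of the other classes
        # A feature is salient only if it pushes the target class up AND
        # the remaining classes down (the standard JSMA saliency condition).
        saliency = np.where((d_target > 0) & (d_others < 0),
                            d_target * np.abs(d_others), 0.0)
        if saliency.max() <= 0:
            break                            # no feature satisfies the condition
        j = int(saliency.argmax())
        x_adv[j] = np.clip(x_adv[j] + theta, *clip)
    return x_adv

Adversarial training then amounts to appending correctly-labelled adversarial samples to the training set and refitting the defended classifiers. Again a sketch under stated assumptions: synthetic data stands in for the power system dataset, and scikit-learn's DecisionTreeClassifier is used as a rough stand-in for J48 (a C4.5 implementation).

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier  # rough stand-in for J48 (C4.5)

# Placeholder data; the paper uses an authentic power system dataset instead.
X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=0)
X_train = MinMaxScaler().fit_transform(X_train)  # match the sketch's [0, 1] clip range

surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Craft adversarial variants of a training subset; they keep their true labels.
X_adv = np.array([jsma(surrogate, x, target=0) for x in X_train[:200]])
X_aug = np.vstack([X_train, X_adv])
y_aug = np.concatenate([y_train, y_train[:200]])
rf_defended = RandomForestClassifier().fit(X_aug, y_aug)
j48_defended = DecisionTreeClassifier().fit(X_aug, y_aug)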




Updated: 2021-02-03