Adversarial ELF Malware Detection Method Using Model Interpretation
IEEE Transactions on Industrial Informatics (IF 12.3) Pub Date: 2022-07-21, DOI: 10.1109/tii.2022.3192901
Yanchen Qiao 1, Weizhe Zhang 1, Zhicheng Tian 2, Laurence T. Yang 3, Yang Liu 2, Mamoun Alazab 4
Recent research shows that executable and linkable format (ELF) malware detection models based on deep learning are vulnerable to adversarial attacks. The most common defense in previous work is adversarial training. Nevertheless, it is inefficient and effective only against specific adversarial attacks. Given that existing adversarial malware generation methods insert perturbation bytes at relatively fixed positions, we propose a new method to detect adversarial ELF malware. Using model interpretation techniques, we analyze the decision-making basis of the malware detection model and extract features of adversarial examples. We then apply anomaly detection techniques to identify adversarial examples. As an add-on module, the proposed method requires neither modifying nor retraining the original detection model. Evaluation results show that the method can effectively defend against adversarial attacks on the malware detection model.
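The abstract gives no implementation details, but the pipeline it describes (interpretation features plus anomaly detection as an add-on module) can be sketched minimally. The sketch below assumes per-byte saliency maps from the detection model are already available, assumes perturbation bytes are appended at the file tail, and uses a simple z-score anomaly detector; all names (`tail_mass`, `SaliencyAnomalyDetector`) are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical add-on module: the tail-region assumption and the
# z-score detector are illustrative choices, not the paper's exact method.

def tail_mass(saliency, region_frac=0.2):
    """Fraction of absolute attribution mass in the file's tail region,
    where append-style adversarial perturbation bytes are often inserted."""
    cut = int(len(saliency) * (1 - region_frac))
    total = np.abs(saliency).sum() + 1e-12
    return np.abs(saliency[cut:]).sum() / total

class SaliencyAnomalyDetector:
    """Flags inputs whose interpretation profile deviates from clean data."""
    def fit(self, saliencies, k=3.0):
        feats = np.array([tail_mass(s) for s in saliencies])
        self.mu = feats.mean()
        self.sigma = feats.std() + 1e-12
        self.k = k
        return self

    def is_adversarial(self, saliency):
        z = (tail_mass(saliency) - self.mu) / self.sigma
        return bool(z > self.k)

rng = np.random.default_rng(0)
# Stand-in saliency maps (in practice: per-byte gradients of the detector).
clean = [rng.random(1000) for _ in range(50)]
adversarial = rng.random(1000)
adversarial[800:] += 50.0  # heavy attribution on appended perturbation bytes

det = SaliencyAnomalyDetector().fit(clean)
flag_adv = det.is_adversarial(adversarial)
flag_clean = det.is_adversarial(rng.random(1000))
```

Because the module only consumes the model's attributions, it can sit in front of any existing detector without touching its weights, which matches the "no retraining" property claimed in the abstract.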

Updated: 2022-07-21