Adversarial EXEmples
ACM Transactions on Privacy and Security (IF 3.0) Pub Date: 2021-09-02, DOI: 10.1145/3473039
Luca Demetrio 1, Scott E. Coull 2, Battista Biggio 3, Giovanni Lagorio 4, Alessandro Armando 4, Fabio Roli 5
Recent work has shown that adversarial Windows malware samples—referred to as adversarial EXEmples in this article—can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of the file, potentially limiting their effectiveness, or require running computationally demanding validation steps to discard malware variants that do not execute correctly in sandbox environments. In this work, we overcome these limitations by developing a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks based on practical, functionality-preserving manipulations of the Windows Portable Executable (PE) file format. These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section. Our experimental results show that these attacks outperform existing ones in both white-box and black-box scenarios, achieving a better tradeoff between evasion rate and size of the injected payload, while also evading models that have been shown to be robust to previous attacks. To facilitate reproducibility of our findings, we open-source our framework and all the corresponding attack implementations as part of the secml-malware Python library. We conclude this work by discussing the limitations of current machine learning-based malware detectors, along with potential mitigation strategies based on embedding domain knowledge coming from subject-matter experts directly into the learning process.
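To make the structural idea behind these manipulations concrete, the following is a minimal sketch—not the authors' secml-malware implementation—of a Full DOS-style injection. It exploits the fact that, in a modern PE file, the Windows loader only reads the `MZ` magic and the `e_lfanew` field (the dword at offset 0x3C pointing to the PE header) from the legacy DOS header, leaving the bytes in between free to carry an adversarial payload. The 64-byte header buffer here is fabricated for illustration; real attacks operate on complete executables.

```python
import struct

DOS_HEADER_SIZE = 64  # fixed size of the legacy IMAGE_DOS_HEADER

def dos_header_slack(pe_bytes: bytes) -> tuple[int, int]:
    """Return the (start, end) byte range inside the DOS header that the
    Windows loader ignores: everything between the 'MZ' magic (2 bytes)
    and the e_lfanew field (4 bytes at offset 0x3C). Full DOS rewrites
    this region; Extend additionally grows it by increasing e_lfanew;
    Shift instead moves the content of the first section forward."""
    assert pe_bytes[:2] == b"MZ", "not a PE/DOS executable"
    # e_lfanew: little-endian dword at offset 0x3C pointing to the PE header
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    assert e_lfanew >= DOS_HEADER_SIZE, "PE header overlaps the DOS header"
    return 2, 0x3C

# Fabricated 64-byte DOS header: 'MZ', zero padding, e_lfanew = 64
header = bytearray(b"MZ" + b"\x00" * 62)
struct.pack_into("<I", header, 0x3C, DOS_HEADER_SIZE)

start, end = dos_header_slack(bytes(header))
payload = b"\xCC" * (end - start)   # attacker-chosen adversarial bytes
header[start:end] = payload         # Full DOS-style injection

# Loader-relevant fields are untouched, so functionality is preserved
assert header[:2] == b"MZ"
assert struct.unpack_from("<I", header, 0x3C)[0] == DOS_HEADER_SIZE
print(f"injected {end - start} bytes into the DOS header slack")
```

In a real attack the payload bytes would be chosen by the white-box or black-box optimizer to push the detector's score below its decision threshold, rather than set to a constant as above.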

Updated: 2021-09-02