Towards an interpretable deep learning model for mobile malware detection and family identification
Computers & Security (IF 5.6), Pub Date: 2021-01-17, DOI: 10.1016/j.cose.2021.102198
Giacomo Iadarola, Fabio Martinelli, Francesco Mercaldo, Antonella Santone

Mobile devices pervade the everyday activities of our lives. Each day we store a plethora of sensitive and private information on smart devices such as smartphones or tablets, which are typically equipped with an always-on internet connection. This information is of interest to malware writers, who are developing increasingly aggressive harmful code to steal sensitive and private information from mobile devices. Considering the weaknesses exhibited by current signature-based antimalware detection, in this paper we propose a method that represents applications as images, which are then used as input to an explainable deep learning model designed by the authors for Android malware detection and family identification. Moreover, we show how explainability can be used by the analyst to assess different models. Experimental results demonstrate the effectiveness of the proposed method, which achieves an average accuracy ranging from 0.96 to 0.97 on 8446 Android samples belonging to six different malware families plus one additional class of trusted samples, while also providing interpretability for the predictions made by the model.
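The abstract does not detail the image encoding, but a common approach in this line of work maps the raw bytes of an application's executable (for example, the classes.dex inside an APK) to grayscale pixel intensities. The sketch below illustrates that idea only; the file path, row width, and resize target are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
from PIL import Image

def bytes_to_image(path: str, width: int = 256) -> Image.Image:
    # One byte becomes one grayscale pixel; each row holds 'width' bytes.
    data = np.fromfile(path, dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)  # zero-pad the last row
    padded[:len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Example: produce a fixed-size input for a CNN (224x224 is an assumption).
img = bytes_to_image("classes.dex").resize((224, 224))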
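Likewise, the abstract names neither the network architecture nor the explanation technique. A minimal sketch, assuming a plain CNN over such images with seven output classes (six malware families plus the trusted class mentioned above) and a Grad-CAM-style heatmap for interpretability, might look as follows; the layer sizes and the layer name "last_conv" are assumptions for illustration, not the paper's design.

import tensorflow as tf

def build_model(num_classes: int = 7) -> tf.keras.Model:
    # Small CNN over 224x224 grayscale images; one output per family + trusted.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(224, 224, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", name="last_conv"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def grad_cam(model, image, class_idx, layer="last_conv"):
    # Gradient-weighted class activation map: which image regions (hence which
    # byte regions of the app) most influenced the predicted class.
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # pool gradients per channel
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized heatmap

Overlaying the returned heatmap on the byte image gives the analyst the kind of interpretability the abstract describes: a visual cue to the regions of the application that drove the detection or family assignment, which can also be used to compare and assess different models.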



Updated: 2021-03-05