CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning
IEEE Transactions on Dependable and Secure Computing (IF 7.0) Pub Date: 2020-01-01, DOI: 10.1109/tdsc.2020.3024191
Mojan Javaheripi 1 , Mohammad Samragh 1 , Bita Darvish Rouhani 1 , Tara Javidi 1 , Farinaz Koushanfar 1

This paper proposes CuRTAIL, a novel end-to-end computing framework for characterizing and thwarting the adversarial space in the context of Deep Learning (DL). The framework protects deep neural networks against adversarial samples, which are perturbed inputs carefully crafted by malicious entities to mislead the underlying DL model. The precursor for the proposed methodology is a set of new quantitative metrics to assess the vulnerability of various deep learning architectures to adversarial samples. CuRTAIL formalizes the goal of preventing adversarial samples as minimization of the space left unexplored by the pertinent DL model, as characterized in CuRTAIL's vulnerability analysis step. To thwart the adversarial machine learning attack, CuRTAIL introduces the concept of Modular Robust Redundancy (MRR) as a viable solution to achieve the formalized minimization objective. The MRR methodology explicitly characterizes the geometry of the input data and the DL model parameters. It then learns a set of complementary but disjoint models which maximally cover the unexplored subspaces of the target DL model, thus reducing the risk of integrity attacks. We extensively evaluate CuRTAIL's performance against state-of-the-art attack models including Fast Gradient Sign, Jacobian Saliency Map Attack, and DeepFool. Proof-of-concept implementations analyzing various data collections, including MNIST, CIFAR10, and ImageNet, corroborate CuRTAIL's effectiveness at detecting adversarial samples in different settings. The computations in each MRR module can be performed independently of the other redundancy modules. As such, the CuRTAIL detection algorithm can be completely parallelized across multiple hardware settings to achieve maximum throughput. The execution overhead of each MRR module is the same as that of the main DL model. We further provide an accompanying automated Application Programming Interface (API) to facilitate the adoption of the proposed framework for various applications.
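The MRR idea of running complementary models alongside the victim model, and flagging inputs on which they disagree, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual implementation: the redundancy modules are stand-in linear classifiers with different random seeds, and names such as `victim_predict` and `mrr_modules` are illustrative assumptions.

```python
import numpy as np

def make_linear_classifier(n_features, n_classes, seed):
    """Stand-in for one independently trained (disjoint) model."""
    w = np.random.default_rng(seed).normal(size=(n_features, n_classes))
    return lambda x: int(np.argmax(x @ w))

N_FEATURES, N_CLASSES = 16, 4
# The main (victim) DL model being protected.
victim_predict = make_linear_classifier(N_FEATURES, N_CLASSES, seed=0)
# Complementary redundancy modules; different seeds only mimic the
# paper's notion of models covering disjoint unexplored subspaces.
mrr_modules = [make_linear_classifier(N_FEATURES, N_CLASSES, seed=s)
               for s in (1, 2, 3)]

def detect_adversarial(x, threshold=0.5):
    """Flag x if the fraction of modules disagreeing with the victim
    model exceeds `threshold`.

    Each module runs independently of the others, so this loop could
    be parallelized across hardware units, matching the abstract's
    throughput claim.
    """
    main = victim_predict(x)
    disagreements = sum(m(x) != main for m in mrr_modules)
    return disagreements / len(mrr_modules) > threshold
```

A legitimate input on which all modules agree passes through unflagged; an input that different models classify inconsistently is rejected as likely adversarial. The real framework additionally characterizes the geometry of the input data rather than relying on plain prediction voting.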

Updated: 2020-01-01