DEAttack: A differential evolution based attack method for the robustness evaluation of medical image segmentation
Neurocomputing ( IF 6 ) Pub Date : 2021-09-03 , DOI: 10.1016/j.neucom.2021.08.118
Xiangxiang Cui 1 , Shi Chang 2 , Chen Li 1 , Bin Kong 3 , Lihua Tian 1 , Hongqiang Wang 1 , Peng Huang 2 , Meng Yang 4 , Yenan Wu 4 , Zhongyu Li 1
Deep learning is an effective tool for assisting doctors with many time-consuming and error-prone medical image analysis tasks. However, deep models have been shown to be vulnerable to adversarial attacks, posing significant challenges to clinical applications. Existing work on the robustness of deep learning models in this area is scarce, and most of it focuses on attacking medical image classification models. In this paper, a differential evolution attack (DEAttack) method is proposed to generate adversarial examples for medical image segmentation models. Compared with the widely investigated gradient-based attack methods, our method requires no extra information such as the network's structure and weights. Additionally, it benefits from the embedded differential evolution algorithm, which preserves diversity in the optimization space. The proposed method achieves better results than gradient-based methods and can successfully attack a segmentation model while perturbing only a small fraction of the image pixels, demonstrating that medical image segmentation models are highly susceptible to adversarial examples. In addition to evaluating model robustness on public datasets, our DEAttack method was also tested on a clinical diagnostic dataset, demonstrating its superior performance and streamlined process for the robustness evaluation of deep models in medical image segmentation.
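The core idea, a gradient-free differential evolution search over a handful of pixel perturbations, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the encoding (a (row, col, value) triple per perturbed pixel), the DE/rand/1 mutation and binomial crossover scheme, and the black-box `loss_fn` callable are all assumptions for the sake of the example.

```python
import numpy as np

def de_attack(image, loss_fn, n_pixels=5, pop_size=20, iters=30,
              mutation=0.5, crossover=0.7, seed=0):
    """Sketch of a differential-evolution attack on a black-box model.

    Each candidate encodes (row, col, value) triples for a few perturbed
    pixels, so no gradients of the target model are required; `loss_fn`
    is any hypothetical scalar score to maximize (e.g. segmentation loss
    of the attacked model on the perturbed image).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dim = 3 * n_pixels                              # (row, col, value) per pixel
    lo = np.tile([0.0, 0.0, 0.0], n_pixels)
    hi = np.tile([h - 1, w - 1, 1.0], n_pixels)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    def apply(cand):
        adv = image.copy()
        for r, c, v in cand.reshape(-1, 3):
            adv[int(r), int(c)] = v                 # overwrite the chosen pixels
        return adv

    fitness = np.array([loss_fn(apply(p)) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + mutation * (b - c), lo, hi)  # DE/rand/1 mutation
            mask = rng.random(dim) < crossover               # binomial crossover
            trial = np.where(mask, trial, pop[i])
            f = loss_fn(apply(trial))
            if f > fitness[i]:                      # greedy selection
                pop[i], fitness[i] = trial, f
    best = pop[np.argmax(fitness)]
    return apply(best), float(fitness.max())
```

Because selection only compares scalar scores, the search needs nothing beyond forward passes of the target model, which is what makes the approach applicable when the network's structure and weights are unavailable.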




Updated: 2021-09-14