Advbox: a toolbox to generate adversarial examples that fool neural networks
arXiv - CS - Machine Learning. Pub Date: 2020-01-13. arXiv:2001.05574
Dou Goodman, Hao Xin, Wang Yang, Wu Yuesheng, Xiong Junfeng, and Zhang Huan

In recent years, neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance. Recent studies have shown that they are all vulnerable to attacks by adversarial examples: small and often imperceptible perturbations to the input images are sufficient to fool even the most powerful neural networks. Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Compared to previous work, our platform supports black-box attacks on Machine-Learning-as-a-Service, as well as more attack scenarios, such as Face Recognition Attack, Stealth T-shirt, and DeepFake Face Detect. The code is licensed under Apache License 2.0 and is openly available at https://github.com/advboxes/AdvBox. Advbox now supports Python 3.
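The core operation the abstract describes, crafting a small input perturbation that flips a model's prediction, can be illustrated with the classic Fast Gradient Sign Method (FGSM), one of the standard attacks toolboxes like AdvBox implement. The sketch below uses plain PyTorch (one of the supported frameworks) rather than AdvBox's own API; the function name fgsm_attack, the epsilon value, and the [0, 1] pixel range are illustrative assumptions, not part of the library.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Minimal FGSM sketch (not AdvBox's API): take one signed
    gradient step of size epsilon, bounded in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image (assumed [0, 1] pixel range).
    return x_adv.clamp(0, 1).detach()

Even this one-step attack is often enough to change the predicted class of a well-trained classifier, which is the vulnerability the toolbox benchmarks; AdvBox additionally covers iterative and black-box variants, as noted above.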

Updated: 2020-08-28