Adv-Emotion: The Facial Expression Adversarial Attack
International Journal of Pattern Recognition and Artificial Intelligence (IF 0.9), Pub Date: 2021-08-20, DOI: 10.1142/s0218001421520169
Yudao Sun, Chunhua Wu, Kangfeng Zheng, Xinxin Niu

Artificial intelligence is developing rapidly in the direction of intellectualization and humanization. Recent studies have shown that many deep learning models are vulnerable to adversarial examples, but few studies have examined adversarial examples that attack facial expression recognition systems. Human–computer interaction relies on facial expression recognition, so the security demands of humanized artificial intelligence must be considered. Motivated by this, we explore the characteristics of adversarial examples for facial expression recognition. In this paper, we are the first to study facial expression adversarial examples (FEAEs), and we propose an adversarial attack method against facial expression recognition systems, a novel method for measuring the adversarial hardness of FEAEs, and two evaluation metrics for FEAE transferability. The experimental results show that our approach is superior to other gradient-based attack methods. We find that FEAEs can attack not only facial expression recognition systems but also face recognition systems, and that the transferability and adversarial hardness of FEAEs can be measured effectively and accurately.
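For context on the gradient-based baselines the abstract compares against, the sketch below shows a standard FGSM-style signed-gradient attack applied to a hypothetical facial expression classifier in PyTorch. The model, the seven-class assumption, and the epsilon value are illustrative placeholders; this is not the paper's Adv-Emotion method, only a minimal example of the family of attacks it is evaluated against.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Craft an adversarial example with one signed-gradient step (FGSM).
    # `model` is assumed to map a face image tensor to expression logits
    # (e.g. 7 basic emotions); `epsilon` bounds the per-pixel perturbation.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true expression label
    loss.backward()
    adv_image = image + epsilon * image.grad.sign()  # step in the loss-increasing direction
    return adv_image.clamp(0.0, 1.0).detach()        # keep pixels in the valid [0, 1] range

A transferability check in the spirit of the abstract would feed the resulting adversarial image to a second, independently trained recognizer (expression or face recognition) and record whether its prediction also changes.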
