Simulated Adversarial Testing of Face Recognition Models
arXiv - CS - Computers and Society. Pub Date: 2021-06-08, DOI: arxiv-2106.04569
Nataniel Ruiz, Adam Kortylewski, Weichao Qiu, Cihang Xie, Sarah Adel Bargal, Alan Yuille, Stan Sclaroff

Most machine learning models are validated and tested on fixed datasets, which can give an incomplete picture of a model's capabilities and weaknesses. Such weaknesses can be revealed at test time in the real world, where failures can cost profits, time, or even lives in certain critical applications. To alleviate this issue, simulators can be controlled in a fine-grained manner through interpretable parameters to explore the semantic image manifold. In this work, we propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner, in order to find weaknesses in a model before deploying it in critical scenarios. We apply this framework to a face recognition scenario and are the first to show that weaknesses of models trained on real data can be discovered using simulated samples. Using the proposed method, we find adversarial synthetic faces that fool contemporary face recognition models, demonstrating that these models have weaknesses that are not measured by commonly used validation datasets. We hypothesize that such adversarial examples are not isolated, but usually lie in connected components of the simulator's latent space. We present a method for finding these adversarial regions, as opposed to the typical adversarial points found in the adversarial example literature.
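To make the idea concrete, the sketch below illustrates the kind of search the abstract describes: a black-box, gradient-free search over a simulator's interpretable parameters (pose and lighting here) for settings that drive a face verifier's match score down. This is a minimal illustration under stated assumptions, not the paper's method: `render_face`, `verify_score`, and `adversarial_search` are hypothetical stand-ins for a real 3D face simulator, a trained face recognition model, and the paper's learned adversarial search, which is replaced here by plain random local search.

```python
# Illustrative sketch only: gradient-free adversarial search over a
# simulator's interpretable parameters (yaw, pitch, light), looking for
# settings where a face verifier's match score collapses. All functions
# below are hypothetical stand-ins, not the paper's actual components.
import numpy as np

rng = np.random.default_rng(0)

def render_face(params: np.ndarray) -> np.ndarray:
    """Stand-in simulator: maps interpretable parameters
    (yaw, pitch, light) to an image. A real setup would call a
    3D face renderer here."""
    yaw, pitch, light = params
    img = np.zeros((64, 64))
    # Toy rendering: pose parameters shift a bright blob around the
    # canvas; the lighting parameter modulates its brightness.
    cx = int(32 + 20 * np.sin(yaw))
    cy = int(32 + 20 * np.sin(pitch))
    img[max(cy - 5, 0):cy + 5, max(cx - 5, 0):cx + 5] = 0.5 + 0.5 * np.cos(light)
    return img

def verify_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Stand-in verifier: cosine similarity of flattened images.
    A real setup would embed both images with a face recognition
    network and compare the embeddings."""
    a, b = img_a.ravel(), img_b.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
    return float(a @ b / denom)

def adversarial_search(reference: np.ndarray, steps: int = 200):
    """Random local search: perturb the simulator parameters and keep
    any move that lowers the match score against the enrolled identity,
    i.e. any move that makes the verifier more wrong."""
    params = np.zeros(3)  # start at a frontal, well-lit face
    best = verify_score(render_face(params), reference)
    for _ in range(steps):
        candidate = params + rng.normal(scale=0.1, size=3)
        score = verify_score(render_face(candidate), reference)
        if score < best:  # lower score = stronger recognition failure
            params, best = candidate, score
    return params, best

reference = render_face(np.zeros(3))  # enrolled image of the identity
adv_params, adv_score = adversarial_search(reference)
print(f"adversarial params {adv_params}, match score {adv_score:.3f}")
```

Reading the abstract, the region-finding contribution would then amount to expanding around a failure point such as `adv_params` and characterizing the connected set of simulator parameters whose rendered faces still fool the verifier, rather than reporting a single adversarial point.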

Updated: 2021-06-09