Evaluating Automated Face Identity-Masking Methods with Human Perception and a Deep Convolutional Neural Network
ACM Transactions on Applied Perception (IF 1.9). Pub Date: 2021-01-01. DOI: 10.1145/3422988
Kimberley D. Orsten Hooge, Asal Baragchizadeh, Thomas P. Karnowski, David S. Bolme, Regina Ferrell, Parisa R. Jesudasen, Carlos D. Castillo, Alice J. O'Toole

Face de-identification (or “masking”) algorithms have been developed in response to the prevalent use of video recordings in public places. We evaluated the success of face identity masking for human perceivers and a deep convolutional neural network (DCNN). Eight de-identification algorithms were applied to videos of drivers’ faces, while they actively operated a motor vehicle. These masks were pre-selected to be applicable to low-quality video and to maintain coarse information about facial actions. Humans studied high-resolution images to learn driver identities and were tested on their recognition of active drivers in low-resolution videos. Faces in the videos were either unmasked or were masked by one of the eight algorithms. When participants were tested immediately after learning (Experiment 1), all masks reduced identification, with six of eight masks reducing identification to extremely poor performance. In a second experiment, two of the most effective masks were tested after a delay of 7 or 28 days. The delay did not further reduce identification of the masked faces. In all masked conditions, participants maintained stringent decision criteria, with low confidence in recognition, further indicating the effectiveness of the masks. Next, the DCNN performed an identity-matching task between high-resolution images and masked videos—a task analogous to that done by humans. The pattern of accuracy for the DCNN mirrored some, but not all, aspects of human performance, highlighting the need to test the effectiveness of identity masking for both humans and machines. The DCNN was also tested on its ability to match identity between masked and unmasked versions of the same video, based only on the face. DCNN performance for the eight masks offers insight into the nature of the information in faces that is coded in these networks.
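The DCNN identity-matching task described above can be illustrated with a minimal sketch: embed the high-resolution study image and the frames of a (masked or unmasked) driver video, pool the frame embeddings into a single video template, and take the cosine similarity as the identity-match score. The sketch below assumes L2-normalized embeddings, mean-pooling over frames, and a 512-dimensional feature vector; the `embed_face` function is a hypothetical placeholder, not the network actually used in the paper.

```python
# Hedged sketch of an image-to-video identity-matching protocol:
# cosine similarity between the embedding of a high-resolution still
# and a pooled embedding of frames from a masked driver video.
import numpy as np

def embed_face(image: np.ndarray, dim: int = 512) -> np.ndarray:
    """Placeholder for a face-recognition DCNN forward pass.

    In practice this would run a cropped, aligned face through a trained
    network and return its penultimate-layer feature vector. Here the
    pixels are hashed into a deterministic pseudo-embedding for illustration.
    """
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # L2-normalize

def video_template(frames: list[np.ndarray]) -> np.ndarray:
    """Pool per-frame embeddings into one video template (mean + renormalize)."""
    embs = np.stack([embed_face(f) for f in frames])
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)

def match_score(still_image: np.ndarray, video_frames: list[np.ndarray]) -> float:
    """Cosine similarity between a high-resolution still and a video template."""
    return float(embed_face(still_image) @ video_template(video_frames))

# Example: score one identity pair (random arrays stand in for real images).
rng = np.random.default_rng(0)
hi_res_still = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
masked_frames = [rng.integers(0, 256, size=(112, 112, 3), dtype=np.uint8)
                 for _ in range(30)]
print(f"identity-match score: {match_score(hi_res_still, masked_frames):+.3f}")
```

With real embeddings, scores for same-identity pairs would be compared against scores for different-identity pairs across the eight masks to quantify how much identity information each mask leaves recoverable by the network.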

Updated: 2021-01-01