Post-comparison mitigation of demographic bias in face recognition using fair score normalization
Pattern Recognition Letters (IF 3.9) Pub Date: 2020-11-06, DOI: 10.1016/j.patrec.2020.11.007
Philipp Terhörst, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper

Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Consequently, an easily integrable solution is needed to reduce the discriminatory effect of these biased systems. Previous work mainly focused on learning less biased face representations, which comes at the cost of strongly degraded overall recognition performance. In this work, we propose a novel unsupervised fair score normalization approach that is specifically designed to reduce the effect of bias in face recognition and subsequently leads to a significant overall performance boost. Our hypothesis is built on the notion of individual fairness: we design a normalization approach that treats “similar” individuals “similarly”. Experiments were conducted on three publicly available datasets captured under controlled and in-the-wild conditions. Results demonstrate that our solution reduces demographic bias, e.g. by up to 82.7% when gender is considered. Moreover, it mitigates bias more consistently than existing works. In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 10⁻³ and by up to 82.9% at a false match rate of 10⁻⁵. Additionally, it is easily integrable into existing recognition systems and is not limited to face biometrics.
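The abstract does not spell out the normalization itself. A minimal sketch of the general idea, unsupervised per-cluster score offsets chosen so that one global decision threshold yields a similar false match rate (FMR) for each sub-group, might look as follows. The two-cluster setup, the synthetic score distributions, and the averaged-offset formula in `normalize` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: two demographic-like clusters whose impostor
# (non-mated) comparison scores follow different distributions,
# i.e., a biased recognition system.
impostor = {
    0: rng.normal(0.10, 0.05, 5000),  # sub-group 0: lower impostor scores
    1: rng.normal(0.30, 0.05, 5000),  # sub-group 1: higher impostor scores
}

target_fmr = 1e-3  # desired false match rate

# Per-cluster threshold that would achieve the target FMR locally,
# plus the single global threshold the deployed system actually uses.
thr = {k: np.quantile(v, 1 - target_fmr) for k, v in impostor.items()}
global_thr = np.quantile(np.concatenate(list(impostor.values())), 1 - target_fmr)

def normalize(score, cluster_a, cluster_b):
    """Shift a raw comparison score so the single global threshold
    yields roughly the same FMR in every cluster; the offsets of the
    two compared samples' clusters are averaged."""
    offset = 0.5 * (thr[cluster_a] - global_thr) + 0.5 * (thr[cluster_b] - global_thr)
    return score - offset

# Per-cluster FMR before and after normalization, at the global threshold.
fmr_raw = {k: float(np.mean(v > global_thr)) for k, v in impostor.items()}
fmr_norm = {k: float(np.mean(normalize(v, k, k) > global_thr)) for k, v in impostor.items()}
```

Before normalization, the higher-scoring cluster absorbs nearly all false matches at the global threshold; after shifting, both clusters sit close to the target FMR, which is the "similar individuals treated similarly" behavior the abstract describes.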




Updated: 2020-11-13