Fair Normalizing Flows
arXiv - CS - Machine Learning. Pub Date: 2021-06-10, DOI: arxiv-2106.05937
Mislav Balunović, Anian Ruoss, Martin Vechev

Fair representation learning is an attractive approach that promises fairness of downstream predictors by encoding sensitive data. Unfortunately, recent work has shown that strong adversarial predictors can still exhibit unfairness by recovering sensitive attributes from these representations. In this work, we present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations. Specifically, we consider a practical setting where we can estimate the probability density for sensitive groups. The key idea is to model the encoder as a normalizing flow trained to minimize the statistical distance between the latent representations of different groups. The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor. We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning, on a variety of challenging real-world datasets.
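The core mechanism described above, exact latent densities obtained from the flow's change-of-variables formula and used to bound the statistical distance between groups, can be illustrated compactly. The following is a minimal sketch under stated assumptions, not the authors' implementation: it assumes 1-D inputs, known Gaussian group densities, and a single affine flow per group, and it minimizes a direct Monte Carlo estimate of the total variation distance (the paper instead optimizes a KL-based upper bound on it). Names such as AffineFlow and latent_log_density are hypothetical.

```python
# A minimal sketch of the FNF idea, not the authors' implementation.
# Assumptions (all hypothetical): 1-D inputs, known Gaussian group
# densities p0/p1, one affine flow per group, and a direct Monte Carlo
# estimate of total variation in place of the paper's KL-based bound.
import torch

class AffineFlow(torch.nn.Module):
    """Invertible map z = exp(log_s) * x + t with a tractable Jacobian."""
    def __init__(self):
        super().__init__()
        self.log_s = torch.nn.Parameter(torch.zeros(1))
        self.t = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):               # encode: x -> z
        return x * self.log_s.exp() + self.t

    def inverse(self, z):               # decode: z -> x
        return (z - self.t) * (-self.log_s).exp()

    def log_abs_det_inverse(self, z):   # log |d f^{-1}(z) / d z|
        return -self.log_s.expand(z.shape[0])

def latent_log_density(flow, base, z):
    # Exact change of variables, the property FNF relies on:
    # log q(z) = log p(f^{-1}(z)) + log |det J_{f^{-1}}(z)|
    return base.log_prob(flow.inverse(z)).squeeze(-1) + flow.log_abs_det_inverse(z)

p0 = torch.distributions.Normal(-1.0, 1.0)  # density of group a = 0 (assumed known)
p1 = torch.distributions.Normal(2.0, 0.5)   # density of group a = 1 (assumed known)
f0, f1 = AffineFlow(), AffineFlow()
opt = torch.optim.Adam([*f0.parameters(), *f1.parameters()], lr=1e-2)

for step in range(2000):
    x0, x1 = p0.sample((512, 1)), p1.sample((512, 1))
    z0, z1 = f0(x0), f1(x1)
    # Exact log-densities of both latent distributions at both sample sets.
    lq0_z0 = latent_log_density(f0, p0, z0)
    lq1_z0 = latent_log_density(f1, p1, z0)
    lq0_z1 = latent_log_density(f0, p0, z1)
    lq1_z1 = latent_log_density(f1, p1, z1)
    # TV(q0, q1) = 0.5 * (E_{q0}[(1 - q1/q0)^+] + E_{q1}[(1 - q0/q1)^+]);
    # log-ratios are clamped for numerical stability.
    tv = 0.5 * (torch.relu(1 - (lq1_z0 - lq0_z0).clamp(max=20).exp()).mean()
                + torch.relu(1 - (lq0_z1 - lq1_z1).clamp(max=20).exp()).mean())
    opt.zero_grad()
    tv.backward()
    opt.step()
```

Since the abstract's guarantee rests on the fact that the advantage of any adversarial downstream predictor over the latent representation is bounded by the total variation distance between the two latent group densities, driving this Monte Carlo estimate toward zero is what tightens the fairness bound in this sketch.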

Updated: 2021-06-11