FERMI: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information
arXiv - CS - Information Theory Pub Date : 2021-02-24 , DOI: arxiv-2102.12586
Andrew Lowy, Rakesh Pavan, Sina Baharlouei, Meisam Razaviyayn, Ahmad Beirami

In this paper, we propose a new notion of fairness violation, called Exponential Rényi Mutual Information (ERMI). We show that ERMI is a strong fairness violation notion in the sense that it provides upper bound guarantees on existing notions of fairness violation. We then propose the Fair Empirical Risk Minimization via ERMI regularization framework, called FERMI. Whereas most existing in-processing fairness algorithms are deterministic, we provide the first stochastic optimization method with a provable convergence guarantee for solving FERMI. Our stochastic algorithm is amenable to large-scale problems, as we demonstrate experimentally. In addition, we provide a batch (deterministic) algorithm for solving FERMI with the optimal rate of convergence. Both of our algorithms are applicable to problems with multiple (non-binary) sensitive attributes and non-binary targets. Extensive experiments show that FERMI achieves the most favorable tradeoffs between fairness violation and test accuracy across various problem setups compared with state-of-the-art baselines.
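The abstract describes ERMI as a fairness-violation measure between the model's predictions and the sensitive attributes. As a rough illustration (not the paper's exact formulation), the order-2 version of such a measure coincides with the χ² divergence between the joint distribution of prediction and sensitive attribute and the product of their marginals; the sketch below computes a plug-in estimate of that quantity from finite samples. The function name `ermi_plugin` and the order-2 assumption are ours, introduced for illustration only.

```python
import numpy as np

def ermi_plugin(yhat, s):
    """Plug-in estimate of an order-2 exponential Renyi mutual information:
        sum_{j,k} p(yhat=j, s=k)^2 / (p(yhat=j) * p(s=k)) - 1,
    i.e. the chi^2 divergence between the empirical joint distribution
    of (prediction, sensitive attribute) and the product of its marginals.
    A value of 0 indicates empirical independence (no fairness violation)."""
    yhat = np.asarray(yhat)
    s = np.asarray(s)
    y_vals = np.unique(yhat)   # supports non-binary predictions
    s_vals = np.unique(s)      # supports non-binary sensitive attributes
    # Empirical joint distribution over (prediction, sensitive attribute).
    joint = np.zeros((len(y_vals), len(s_vals)))
    for i, y in enumerate(y_vals):
        for k, sv in enumerate(s_vals):
            joint[i, k] = np.mean((yhat == y) & (s == sv))
    p_y = joint.sum(axis=1)    # marginal of predictions
    p_s = joint.sum(axis=0)    # marginal of sensitive attributes
    return float((joint ** 2 / np.outer(p_y, p_s)).sum() - 1.0)
```

In an in-processing scheme of the kind the abstract describes, such an estimate (in a differentiable form) would be added to the empirical risk as a regularizer, trading prediction accuracy against fairness violation.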

Updated: 2021-02-26