Fair inference on error-prone outcomes
arXiv - CS - Computers and Society. Pub Date: 2020-03-17. arXiv: 2003.07621
Laura Boeschoten, Erik-Jan van Kesteren, Ayoub Bagheri, Daniel L. Oberski

Fair inference in supervised learning is an important and active area of research, yielding a range of useful methods to assess and account for fairness criteria when predicting ground truth targets. As recent work has shown, however, when target labels are error-prone, prediction unfairness can arise from measurement error. In this paper, we show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest. To remedy this problem, we suggest a framework that combines two existing literatures: fair ML methods, such as those in the counterfactual fairness literature, on the one hand, and measurement models from the statistical literature on the other. We discuss these approaches and the connection between them that yields our framework. In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
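The abstract's central claim, that fairness criteria evaluated against an error-prone proxy can diverge from those holding for the true target, is easy to see in a small simulation. The sketch below is not the authors' code: it assumes a binary sensitive attribute A, a true target Y with equal base rates across groups, and a proxy Y* whose hypothetical sensitivity and specificity differ by group. It then applies a textbook misclassification correction with the error rates treated as known, whereas the paper's latent variable approach covers the realistic case where they must be estimated.

```python
# Minimal sketch (not the authors' code): a fairness check on an
# error-prone proxy label Y* vs. the true target Y. The error rates
# below are hypothetical and assumed known, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Binary sensitive attribute A; true target Y with the same base rate
# in both groups, so Y itself satisfies demographic parity.
A = rng.integers(0, 2, n)
Y = rng.binomial(1, 0.30, n)

# Proxy Y*: measurement quality differs by group (differential
# misclassification), which induces apparent unfairness.
se = np.where(A == 1, 0.95, 0.80)  # sensitivity P(Y*=1 | Y=1, A)
sp = np.where(A == 1, 0.95, 0.80)  # specificity P(Y*=0 | Y=0, A)
Y_star = np.where(Y == 1, rng.binomial(1, se), rng.binomial(1, 1 - sp))

for a, (s, p) in ((0, (0.80, 0.80)), (1, (0.95, 0.95))):
    grp = A == a
    obs = Y_star[grp].mean()
    # Measurement-model ("matrix method") correction, inverting
    #   P(Y*=1|A) = se * P(Y=1|A) + (1 - sp) * (1 - P(Y=1|A))
    corrected = (obs - (1 - p)) / (s + p - 1)
    print(f"group {a}: true {Y[grp].mean():.3f}, "
          f"proxy {obs:.3f}, corrected {corrected:.3f}")
```

With these hypothetical error rates, the proxy shows a positive-rate gap of roughly 0.38 vs. 0.32 between groups even though the true rates are identical (0.30), and the measurement-model correction removes it.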

Updated: 2020-03-18