fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation
arXiv - CS - Mathematical Software Pub Date : 2021-04-01 , DOI: arxiv-2104.00507
Jakub Wiśniewski, Przemysław Biecek

Machine learning decision systems are becoming omnipresent in our lives. From dating apps to rating loan applicants, algorithms affect both our well-being and our future. These systems, however, are not infallible. Moreover, complex predictive models readily learn social biases present in historical data, which can lead to increased discrimination. If we want to create models responsibly, we need tools for in-depth validation of models, also from the perspective of potential discrimination. This article introduces the R package fairmodels, which helps validate fairness and eliminate bias in classification models in an easy and flexible fashion. The fairmodels package offers a model-agnostic approach to bias detection, visualization, and mitigation. The implemented set of functions and fairness metrics enables model fairness validation from different perspectives. The package includes a series of bias-mitigation methods that aim to diminish discrimination in the model. The package is designed not only to examine a single model, but also to facilitate comparisons between multiple models.
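The abstract outlines a three-step workflow: wrap any classifier in a model-agnostic explainer, check it against a set of fairness metrics for a protected attribute, and optionally apply a mitigation method and compare the models. The sketch below is a minimal illustration of that workflow following the pattern in the package's documentation, assuming the `german` credit data set bundled with fairmodels and a plain logistic regression; the choice of model, protected attribute (`Sex`), and privileged level (`"male"`) are illustrative, not prescribed by the paper.

```r
# Minimal sketch of the fairmodels workflow (assumptions: german data set,
# logistic regression, Sex as the protected attribute).
library(fairmodels)
library(DALEX)

data("german")                              # German credit data bundled with fairmodels
y_numeric <- as.numeric(german$Risk) - 1    # 1 = good risk, 0 = bad risk

# Any classifier can be used; the package is model-agnostic via DALEX explainers.
lm_model  <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))
explainer <- DALEX::explain(lm_model, data = german[, -1], y = y_numeric,
                            label = "logistic")

# Bias detection: fairness metrics for subgroups of the protected attribute,
# relative to the privileged group.
fobject <- fairness_check(explainer,
                          protected  = german$Sex,
                          privileged = "male")
print(fobject)   # which metrics pass the fairness check
plot(fobject)    # visualization of metric ratios across subgroups

# Bias mitigation (pre-processing): resample the training data to balance
# outcomes across subgroups, retrain, and compare both models in one check.
idx <- resample(protected = german$Sex, y = y_numeric)
lm_resampled <- glm(Risk ~ ., data = german[idx, ],
                    family = binomial(link = "logit"))
explainer_r  <- DALEX::explain(lm_resampled, data = german[, -1], y = y_numeric,
                               label = "logistic_resampled")

fobject2 <- fairness_check(fobject, explainer_r)
plot(fobject2)   # original and mitigated models side by side
```

Passing the existing fairness object together with a new explainer to `fairness_check()` reuses the same protected attribute and privileged group, which is how the package supports comparing multiple models in a single plot.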

Last updated: 2021-04-02