Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive Attributes
arXiv - CS - Artificial Intelligence Pub Date : 2021-07-28 , DOI: arxiv-2107.13625 William Paul, Philippe Burlina
When deploying artificial intelligence (AI) in the real world, being able to
trust the AI's operation by characterizing how it performs is an ever-present
and important concern. An important and still largely unexplored task in this
characterization is determining the major real-world factors that affect the
AI's behavior, such as weather conditions or lighting, and either a) justifying
why the AI may have failed or b) eliminating the factor's influence.
Determining these sensitive factors relies heavily on collected data that is
diverse enough to cover numerous combinations of the factors, which becomes
more onerous when there are many potential sensitive factors or the system
operates in complex environments. This paper investigates methods that discover
and separate out individual semantic sensitive factors from a given dataset in
order to conduct this characterization, and that mitigate the factors'
sensitivity. We also broaden the remediation of fairness, which normally
addresses only socially relevant factors, to cover the desensitization of AI
with regard to all possible axes of variation in the domain. The proposed
discovery methods reduce the potentially onerous demands of collecting a
sufficiently diverse dataset. In experiments on the road-sign (GTSRB) and
facial-imagery (CelebA) datasets, we show the promise of this scheme for
characterization and remediation, and demonstrate that our approach outperforms
state-of-the-art approaches.
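The paper itself supplies the discovery and mitigation machinery; as a rough, hedged illustration of the characterization step only (not the authors' method), the sketch below computes per-group accuracy across one discovered sensitive attribute and reports the gap between the best- and worst-served groups. All data, the attribute encoding (0 = day, 1 = night lighting), and the disparity metric are synthetic placeholders chosen for this example.

```python
# Illustrative sketch only: characterizing model performance across a
# discovered sensitive attribute. Data and attribute names are synthetic
# placeholders, not drawn from the paper.

def group_accuracy(preds, labels, groups):
    """Accuracy of predictions within each value of a sensitive attribute."""
    stats = {}
    for p, y, g in zip(preds, labels, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (p == y), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def disparity(acc_by_group):
    """Gap between best- and worst-served groups: one simple fairness metric."""
    vals = list(acc_by_group.values())
    return max(vals) - min(vals)

# Synthetic predictions, labels, and a discovered attribute
# (e.g. lighting: 0 = day, 1 = night).
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

acc = group_accuracy(preds, labels, groups)
print(acc)             # {0: 1.0, 1: 0.25}
print(disparity(acc))  # 0.75
```

A large disparity on such an attribute would motivate the mitigation half of the pipeline, for example by reweighting or augmenting the under-served group during training.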
Updated: 2021-07-30