Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder
arXiv - CS - Artificial Intelligence Pub Date : 2020-11-24 , DOI: arxiv-2011.11878 Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon
The problem of fair classification can be alleviated by developing a method that
removes the embedded sensitive information from the classification features.
This line of work on separating sensitive information builds on causal
inference, which enables counterfactual generation to contrast the what-if case
of the opposite sensitive attribute. Alongside this causal separation, a
frequent assumption in deep latent causal models defines a single latent
variable to absorb the entire exogenous uncertainty of the causal graph.
However, we claim that such a structure cannot distinguish between 1)
information caused by the intervention (i.e., the sensitive variable) and 2)
information correlated with the intervention in the data. Therefore, this paper
proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE), which
resolves this limitation by disentangling the exogenous uncertainty into two
latent variables: one 1) independent of interventions and the other 2)
correlated with interventions without causality. In particular, our
disentangling approach preserves the latent variable correlated with
interventions when generating counterfactual examples. We show that our method
estimates the total effect and the counterfactual effect without a complete
causal graph. By adding a fairness regularization, DCEVAE generates a
counterfactually fair dataset while losing less of the original information.
DCEVAE also generates natural counterfactual images by flipping only the
sensitive information. Additionally, we theoretically show the differences
between the covariance structures of DCEVAE and prior works from the
perspective of latent disentanglement.
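The two-latent structure described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the random linear maps stand in for trained encoder/decoder networks, the dimensions are made up, and the sensitive attribute is assumed binary. What it shows is the key structural idea: one posterior for a latent independent of the intervention, one for a latent correlated with it, and a counterfactual generated by flipping the sensitive attribute while *keeping* both latents fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not from the paper).
x_dim, a_dim, z_dim = 8, 1, 4  # features, sensitive attribute, each latent

# Random linear maps standing in for trained encoder/decoder weights.
W_enc_ind = rng.normal(size=(x_dim, 2 * z_dim))           # q(z_ind | x)
W_enc_corr = rng.normal(size=(x_dim + a_dim, 2 * z_dim))  # q(z_corr | x, a)
W_dec = rng.normal(size=(2 * z_dim + a_dim, x_dim))       # p(x | z_ind, z_corr, a)

def encode(W, inp):
    """Return mean and log-variance of a diagonal Gaussian posterior."""
    h = inp @ W
    return h[:, :z_dim], h[:, z_dim:]

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def counterfactual(x, a):
    """Decode x under the flipped sensitive attribute, preserving z_corr."""
    mu_i, lv_i = encode(W_enc_ind, x)
    mu_c, lv_c = encode(W_enc_corr, np.concatenate([x, a], axis=1))
    z_ind = reparameterize(mu_i, lv_i)
    z_corr = reparameterize(mu_c, lv_c)  # kept, not resampled, under intervention
    a_flip = 1.0 - a  # intervene on the binary sensitive attribute
    return np.concatenate([z_ind, z_corr, a_flip], axis=1) @ W_dec

x = rng.normal(size=(3, x_dim))
a = np.ones((3, a_dim))
x_cf = counterfactual(x, a)
print(x_cf.shape)  # (3, 8)
```

In a trained model the decoder would additionally be penalized by the fairness regularization mentioned in the abstract; here only the forward generative pass is sketched.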
Updated: 2020-11-25