Learning Latent Causal Structures with a Redundant Input Neural Network
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-03-29 , DOI: arxiv-2003.13135
Jonathan D. Young, Bryan Andrews, Gregory F. Cooper, Xinghua Lu

Most causal discovery algorithms find causal structure among a set of observed variables. Learning the causal structure among latent variables remains an important open problem, particularly when using high-dimensional data. In this paper, we address a problem for which it is known that inputs cause outputs, and these causal relationships are encoded by a causal network among an unknown number of latent variables. We developed a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function to find causal relationships between input, hidden, and output variables. More specifically, our model allows input variables to interact directly with all latent variables in a neural network, influencing what information the latent variables should encode in order to generate the output variables accurately. In this setting, the direct connections between input and latent variables make the latent variables partially interpretable; furthermore, the connectivity among the latent variables in the neural network serves to model their potential causal relationships to each other and to the output variables. A series of simulation experiments provides support that the RINN method can successfully recover latent causal structure between input and output variables.
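The core architectural idea described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the authors' implementation): each hidden layer receives both the previous layer's activations and the raw input (the "redundant" connections), and an L1 penalty encourages sparse weights so that the surviving connections can be read as a candidate causal graph. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def rinn_forward(x, weights):
    """Forward pass of a toy redundant-input network.

    Every hidden layer after the first sees the concatenation of the
    previous layer's activations and the raw input x, so latent units
    can be tied directly to the inputs that drive them.
    """
    h = x
    for i, W in enumerate(weights[:-1]):
        inp = x if i == 0 else np.concatenate([h, x])
        h = relu(W @ inp)
    return weights[-1] @ h  # linear output layer

def l1_penalty(weights, lam=0.01):
    # Sparsity-encouraging term added to the training objective:
    # drives most weights toward zero, leaving an interpretable
    # sparse connectivity pattern among input, latent, and output.
    return lam * sum(np.abs(W).sum() for W in weights)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 3, 2
weights = [
    rng.normal(size=(n_hidden, n_in)),             # input -> hidden 1
    rng.normal(size=(n_hidden, n_hidden + n_in)),  # [hidden 1, input] -> hidden 2
    rng.normal(size=(n_out, n_hidden)),            # hidden 2 -> output
]
x = rng.normal(size=n_in)
y = rinn_forward(x, weights)
```

In a real training loop the L1 term would be added to the reconstruction loss before backpropagation; the sketch above only shows the forward structure that makes the latent variables partially interpretable.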

Updated: 2020-09-09