Learning Latent Causal Structures with a Redundant Input Neural Network

Young JD, Andrews B, Cooper GF, Lu X. Learning Latent Causal Structures with a Redundant Input Neural Network. arXiv:2003.13135v1, 29 Mar 2020.


Most causal discovery algorithms find causal structure among a set of observed variables. Learning the causal structure among latent variables remains an important open problem, particularly when using high-dimensional data. In this paper, we address a problem for which it is known that inputs cause outputs, and these causal relationships are encoded by a causal network over an unknown number of latent variables. We developed a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function to find causal relationships between input, hidden, and output variables. More specifically, our model allows input variables to directly interact with all latent variables in a neural network to influence what information the latent variables should encode in order to generate the output variables accurately. In this setting, the direct connections between input and latent variables make the latent variables partially interpretable; furthermore, the connectivity among the latent variables in the neural network serves to model their potential causal relationships to each other and to the output variables. A series of simulation experiments provide support that the RINN method can successfully recover latent causal structure between input and output variables.
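The abstract describes an architecture in which the raw input variables feed directly into every hidden (latent) layer, trained with a regularized objective so that surviving connections can be read as candidate causal edges. A minimal NumPy sketch of such a forward pass, assuming ReLU activations, input concatenation at each hidden layer, and an L1 weight penalty (the paper's exact architecture and regularizer may differ; all function names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_rinn(n_in, n_hidden, n_layers, n_out, rng):
    """Random weights for a redundant-input network: every hidden layer
    after the first receives [previous hidden activations, raw inputs]."""
    params = []
    prev = n_in
    for i in range(n_layers):
        fan_in = prev if i == 0 else prev + n_in
        params.append((rng.normal(0.0, 0.1, (fan_in, n_hidden)),
                       np.zeros(n_hidden)))
        prev = n_hidden
    w_out = rng.normal(0.0, 0.1, (n_hidden, n_out))
    return params, w_out

def rinn_forward(x, params, w_out):
    """Forward pass: the input x is re-concatenated into every hidden
    layer, so latent units interact with inputs directly."""
    h = None
    for i, (w, b) in enumerate(params):
        z = x if i == 0 else np.concatenate([h, x], axis=1)
        h = relu(z @ w + b)
    return h @ w_out

def l1_penalty(params, w_out, lam=1e-3):
    """Sparsity-inducing L1 term on all weights; near-zero weights can
    then be pruned when reading off a candidate causal structure."""
    return lam * (sum(np.abs(w).sum() for w, _ in params)
                  + np.abs(w_out).sum())

# Example: 5 inputs, 3 hidden layers of 8 latent units, 2 outputs.
x = rng.normal(size=(4, 5))
params, w_out = init_rinn(5, 8, 3, 2, rng)
y = rinn_forward(x, params, w_out)
```

In this sketch the redundant connections mean each hidden layer's weight matrix has an input-facing block; after L1-regularized training, the nonzero entries of those blocks would indicate which inputs directly influence which latent variables.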

Publication Download: Young_arXiv.pdf (1.12 MB)