Differentiable Causal Discovery Under Heteroscedastic Noise

  • Conference paper
  • Published in: Neural Information Processing (ICONIP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13623)

Abstract

We consider the problem of estimating directed acyclic graphs from observational data. Many studies on functional causal models assume independent noise terms, and therefore suffer under a common violation of that assumption: heteroscedasticity. Several recent studies instead assume heteroscedastic rather than additive noise in the data-generating process, but most of their estimation algorithms handle only bivariate data. This study aims to extend continuous optimization-based methods so that they can handle heteroscedastic noise in multivariate non-linear data with no latent confounders. Numerical experiments on synthetic data and fMRI simulation data show that our estimation algorithm improves the estimation of the causal structure under heteroscedastic noise. We also applied the algorithm to real-world data collected from a ceramic substrate manufacturing process, and the results demonstrate the potential of using the estimated causal graph to accelerate quality improvement.
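The abstract combines two ingredients: a data-generating process whose noise scale depends on the cause (heteroscedasticity), and continuous optimization-based structure learning, which scores candidate weighted adjacency matrices with a differentiable acyclicity function. The sketch below is illustrative only, not the paper's algorithm; the model, coefficients, and function names are hypothetical, and the acyclicity score is the standard NOTEARS-style h(W) = tr(exp(W ∘ W)) − d, here computed with a truncated power series.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_heteroscedastic_pair(n=1000):
    """Sample (x, y) from a bivariate model x -> y whose noise scale
    depends on the cause x (heteroscedastic, multiplicative noise)."""
    x = rng.normal(size=n)
    noise = rng.normal(size=n) * np.exp(0.5 * x)  # noise std grows with x
    y = np.sin(x) + noise
    return x, y

def acyclicity(W, terms=25):
    """NOTEARS-style acyclicity score h(W) = tr(exp(W * W)) - d,
    computed via a truncated matrix power series of the exponential;
    h(W) = 0 exactly when W encodes a DAG."""
    d = W.shape[0]
    M = W * W              # elementwise square keeps the score smooth in W
    E = np.eye(d)
    term = np.eye(d)
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return np.trace(E) - d

W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])  # edge x -> y only: a DAG
W_cyc = np.array([[0.0, 1.5], [0.7, 0.0]])  # x <-> y: contains a cycle
```

In a continuous optimization-based learner, h(W) is driven to zero as a constraint while a data-fit loss is minimized; for the DAG matrix above it is exactly zero (W ∘ W is nilpotent), while the cyclic matrix yields a strictly positive score.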



Author information

Corresponding author: Genta Kikuchi

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kikuchi, G. (2023). Differentiable Causal Discovery Under Heteroscedastic Noise. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_24

  • DOI: https://doi.org/10.1007/978-3-031-30105-6_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30104-9

  • Online ISBN: 978-3-031-30105-6
