Learning dynamic causal mechanisms from non-stationary data

Abstract

Causal discovery from non-stationary time series is an important but challenging task. Most existing non-stationary approaches consider only changes in the causal coefficients, an assumption that is rarely satisfied in real-world scenarios. In this paper, we introduce a Gaussian-based Variational Temporal Abstraction model (GVTA) to detect and learn non-stationary causal mechanisms from multiple time series. First, we use a hierarchical cyclic state-space model to detect the stationary states in the non-stationary time series. Second, we use a Gaussian process algorithm to estimate the causal mechanism of each stationary state. Experimental results on both simulated and real-world data demonstrate the correctness and effectiveness of our method.
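
To make the two-stage idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it stands in for the hierarchical cyclic state-space model with a simple rolling-variance changepoint heuristic that splits the series into approximately stationary regimes, then fits a Gaussian process regression per regime (via scikit-learn) on lagged values as a proxy for estimating each state's causal mechanism. The function names (segment_regimes, fit_regime_mechanisms) and all parameter choices are hypothetical.

    # Illustrative sketch only (not the GVTA implementation from the paper).
    # Stage 1 approximates regime detection with a rolling-variance changepoint
    # heuristic; Stage 2 fits one Gaussian process per detected regime, regressing
    # each variable on the lagged series as a proxy for the per-state mechanism.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def segment_regimes(x, window=50, z_thresh=3.0):
        """Split a 1-D series into index ranges with roughly constant rolling variance."""
        stat = np.array([x[max(0, t - window):t + 1].std() for t in range(len(x))])
        diffs = np.diff(stat)
        jumps = np.where(np.abs(diffs) > z_thresh * (diffs.std() + 1e-12))[0]
        bounds = [0] + sorted(set(jumps.tolist())) + [len(x)]
        return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)
                if bounds[i + 1] - bounds[i] > window]

    def fit_regime_mechanisms(X, segments, lag=1):
        """Fit, for each regime, one GP per target variable mapping lagged values to it."""
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
        mechanisms = []
        for s, e in segments:
            past, future = X[s:e - lag], X[s + lag:e]
            gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
                   .fit(past, future[:, j]) for j in range(X.shape[1])]
            mechanisms.append(gps)
        return mechanisms

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, X = 600, np.zeros((600, 2))
        for t in range(1, n):
            a = 0.9 if t < n // 2 else -0.5   # causal coefficient switches mid-series
            X[t, 0] = 0.5 * X[t - 1, 0] + 0.1 * rng.standard_normal()
            X[t, 1] = a * X[t - 1, 0] + 0.1 * rng.standard_normal()
        segments = segment_regimes(X[:, 1])
        models = fit_regime_mechanisms(X, segments)
        print(f"detected {len(segments)} regime(s), fitted {sum(len(m) for m in models)} GPs")

In this toy setup the causal coefficient from the first variable to the second flips sign halfway through the series, so a sensible regime detector should recover two segments, each with its own fitted mechanism.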


Notes

  1. The EEG data is publicly available from https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State
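For readers who want to load that dataset, here is a brief sketch. It assumes the ARFF file has already been downloaded from the UCI page above and saved locally; the filename "EEG Eye State.arff" and the eye-state label being the last column are assumptions about the UCI release, not details taken from the paper.

    # Sketch of loading the UCI "EEG Eye State" data referenced above.
    # Assumes the ARFF file has been downloaded and saved locally; the filename
    # and the label being the last column are assumptions about the UCI release.
    from scipy.io import arff
    import pandas as pd

    raw, meta = arff.loadarff("EEG Eye State.arff")
    df = pd.DataFrame(raw)
    label_col = df.columns[-1]                      # nominal attribute, loaded as bytes
    df[label_col] = df[label_col].str.decode("utf-8").astype(int)
    eeg = df.drop(columns=[label_col]).to_numpy()   # the 14 EEG channel recordings
    print(eeg.shape, df[label_col].value_counts().to_dict())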


Acknowledgements

This research was supported in part by the National Key R&D Program of China (2021ZD0111501), the National Science Fund for Excellent Young Scholars (62122022), the Natural Science Foundation of China (61876043, 61976052), the Science and Technology Planning Project of Guangzhou (201902010058), and the Guangdong Provincial Science and Technology Innovation Strategy Fund (2019B121203012). Wei Chen was supported by the China Postdoctoral Science Foundation (2021M690734).

Author information

Corresponding author

Correspondence to Wei Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Cai, R., Huang, L., Chen, W. et al. Learning dynamic causal mechanisms from non-stationary data. Appl Intell 53, 5437–5448 (2023). https://doi.org/10.1007/s10489-022-03843-3
