Discovering excitatory relationships using dynamic Bayesian networks

  • Regular Paper
  • Published:
Knowledge and Information Systems

Abstract

Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human–computer interaction modeling. In this paper, we introduce the notion of excitatory networks: temporal models in which all connections are stimulative rather than inhibitive. The emphasis on excitatory connections facilitates learning of network models by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships, and that such relationships can be summarized into a dynamic Bayesian network (DBN). This leads to an algorithm that is significantly faster than state-of-the-art methods for inferring DBNs, while simultaneously providing theoretical guarantees on network optimality. We demonstrate the advantages of our approach through an application in neuroscience, where we show how strong excitatory networks can be efficiently inferred from both mathematical models of spiking neurons and several real neuroscience datasets.
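The screening step the abstract alludes to — finding node pairs with high mutual information before assembling a DBN — can be illustrated in miniature. The sketch below is not the paper's algorithm; it is a minimal illustration, assuming spike trains discretized into binary time series, of how a lagged mutual-information score separates an excitatory-style coupling (one stream reliably fires one step after another) from an unrelated stream. The function name `lagged_mutual_info` and the toy data are hypothetical.

```python
import math
from collections import Counter

def lagged_mutual_info(src, dst, lag=1):
    """Empirical mutual information (in bits) between src[t] and
    dst[t + lag] for two binary event streams (lists of 0/1)."""
    pairs = list(zip(src[:-lag], dst[lag:]))
    n = len(pairs)
    joint = Counter(pairs)              # counts of (x, y) pairs
    px = Counter(x for x, _ in pairs)   # marginal counts of src
    py = Counter(y for _, y in pairs)   # marginal counts of dst
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Toy streams: b fires one step after a (excitatory-style coupling),
# while c is unrelated to a.
a = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
b = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
c = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]

print(lagged_mutual_info(a, b))  # high: b[t+1] is determined by a[t]
print(lagged_mutual_info(a, c))  # near zero: no lagged dependence
```

In a pipeline like the one the abstract describes, only pairs scoring above a threshold would be retained as candidate parent–child edges for the DBN, shrinking the structure-search space.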



Author information

Correspondence to Debprakash Patnaik.

About this article

Cite this article

Patnaik, D., Laxman, S. & Ramakrishnan, N. Discovering excitatory relationships using dynamic Bayesian networks. Knowl Inf Syst 29, 273–303 (2011). https://doi.org/10.1007/s10115-010-0344-6
