Abstract
In this paper we propose an algorithm for the online maintenance of the joint probability distribution of a data stream. The joint distribution is modeled by a mixture of low-dependence Bayesian networks and maintained by an online EM algorithm. Two key observations motivate this choice of model. First, the probability distribution can be maintained in time linear in the number of data points, i.e., constant time per data point, whereas other methods, such as general Bayesian networks, have polynomial time complexity. Second, there is empirical evidence in the literature [1] that mixtures of Naive-Bayes structures can model data as accurately as Bayesian networks. In this paper we relax the constraint of Naive-Bayes component structures to that of arbitrary low-dependence structures, and propose an online algorithm for maintaining a mixture model of arbitrary Bayesian networks. We show empirically that the speed-up is achieved with no decrease in performance.
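To illustrate the kind of constant-time-per-point maintenance the abstract describes, the following is a minimal sketch of a stepwise (online) EM update for a mixture of Naive-Bayes components over a discrete data stream — the simplest of the low-dependence structures considered. The class name, the sufficient-statistics parameterization, and the decaying learning-rate schedule `eta_t = (t + 1)^-0.7` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class OnlineNBMixture:
    """Hypothetical sketch: stepwise online EM for a mixture of
    Naive-Bayes components over discrete data. Each data point costs
    O(components * features) time, i.e., constant per point."""

    def __init__(self, n_components, n_features, n_values, seed=0):
        rng = np.random.default_rng(seed)
        # Exponentially weighted sufficient statistics:
        # s_pi[k] ~ mixture weight of component k (sums to 1),
        # s_theta[k, j, v] ~ s_pi[k] * P(X_j = v | component k).
        self.s_pi = np.full(n_components, 1.0 / n_components)
        theta0 = rng.dirichlet(np.ones(n_values), size=(n_components, n_features))
        self.s_theta = self.s_pi[:, None, None] * theta0
        self.t = 0  # number of points seen so far

    def update(self, x):
        """One E-step + stepwise M-step for a single data point x
        (an integer vector of feature values)."""
        f = self.s_theta.shape[1]
        theta = self.s_theta / self.s_pi[:, None, None]
        # E-step: responsibilities r_k ∝ pi_k * prod_j P(x_j | component k)
        lik = self.s_pi * np.prod(theta[:, np.arange(f), x], axis=1)
        r = lik / lik.sum()
        # Stepwise M-step: decay old statistics, add the new point's.
        self.t += 1
        eta = (self.t + 1) ** -0.7  # assumed step-size schedule
        self.s_pi = (1 - eta) * self.s_pi + eta * r
        self.s_theta *= (1 - eta)
        self.s_theta[:, np.arange(f), x] += eta * r[:, None]
        return r

    def logprob(self, x):
        """Log joint probability of x under the current mixture."""
        f = self.s_theta.shape[1]
        theta = self.s_theta / self.s_pi[:, None, None]
        return np.log(np.sum(self.s_pi * np.prod(theta[:, np.arange(f), x], axis=1)))
```

Because the statistics are updated by a convex combination, the mixture weights stay normalized and each conditional distribution remains a proper probability table after every update; extending the sketch to richer low-dependence component structures would replace the per-feature tables with per-edge conditional tables.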
References
Lowd, D., Domingos, P.: Naive Bayes models for probability estimation. In: Twenty-Second International Conference on Machine Learning, pp. 529–536 (2005)
Aggarwal, C.: Data Streams: Models and Algorithms. Springer, Heidelberg (2007)
Chow, C.K., Liu, C.N.: Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 462–467 (1968)
Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 1–38 (1977)
Sato, M.A., Ishii, S.: On-line EM algorithm for the normalized Gaussian network. Neural Computation 12(2), 407–432 (1999)
Bradley, P., Fayyad, U., Reina, C.: Scaling EM (expectation maximization) clustering to large databases. Technical Report MSR-TR-98-35, Microsoft Research (1998)
Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian network classifiers. Machine Learning, 103–130 (1997)
Zhou, A., Cai, Z., Wei, L., Qian, W.: M-kernel merging: Towards density estimation over data streams. In: DASFAA 2003: Proceedings of the Eighth International Conference on Database Systems for Advanced Applications, pp. 285–292. IEEE Computer Society, Washington (2003)
Heinz, C., Seeger, B.: Wavelet density estimators over data streams. In: The 20th Annual ACM Symposium on Applied Computing (2005)
Thiesson, B., Meek, C., Heckerman, D.: Accelerating EM for large databases. Machine Learning 45(3), 279–299 (2001)
Cooper, G., Herskovits, E.: A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 309–347 (1992)
Murphy (2004), http://www.cs.ubc.ca/~murphyk/software/bnt/bnt.html
© 2008 Springer-Verlag Berlin Heidelberg
Patist, J.P. (2008). Fast Online Estimation of the Joint Probability Distribution. In: Washio, T., Suzuki, E., Ting, K.M., Inokuchi, A. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2008. Lecture Notes in Computer Science(), vol 5012. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-68125-0_65
Print ISBN: 978-3-540-68124-3
Online ISBN: 978-3-540-68125-0