Abstract
In this study we present a new sparse polynomial regression mixture model for fitting time series. The contribution of this work is the introduction of a smoothing prior over the component regression coefficients within a Bayesian framework, realised through an appropriate Student-t distribution. The sparsity-favouring prior makes the model more robust, less sensitive to the order p of the polynomials, and improves the clustering procedure. The whole framework is formulated as a maximum a posteriori (MAP) estimation problem, to which the well-known EM algorithm can be applied, yielding closed-form update equations for the model parameters. The efficiency of the proposed sparse mixture model is demonstrated experimentally by applying it to several real benchmark datasets and comparing it with the standard regression mixture model and the K-means algorithm. The results are very promising.
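To make the procedure summarised in the abstract concrete, the following is a minimal sketch (in Python/NumPy) of an EM loop for a polynomial regression mixture with a sparsity-favouring prior, approximated here by per-coefficient precisions updated with a simple fixed point. The function name fit_sparse_regression_mixture, the Gamma hyperparameters a and b, the initialisation scheme, and the precision update are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch (not the authors' exact method) of a polynomial regression
# mixture fitted by EM, with per-coefficient precisions mimicking a
# Student-t style sparsity prior via MAP/EM updates.
import numpy as np

def fit_sparse_regression_mixture(Y, t, K=3, p=8, a=1e-3, b=1e-3,
                                  n_iter=100, seed=0):
    """Y: (N, T) time series sampled at times t (T,). Returns params, gamma."""
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    Phi = np.vander(t, p + 1, increasing=True)   # (T, p+1) polynomial design
    D = p + 1

    gamma = rng.dirichlet(np.ones(K), size=N)    # (N, K) random responsibilities
    W = np.zeros((K, D))                         # component regression coefficients
    sigma2 = np.full(K, Y.var())                 # component noise variances
    alpha = np.ones((K, D))                      # per-coefficient precisions
    pi = np.full(K, 1.0 / K)                     # mixing proportions

    for _ in range(n_iter):
        # ----- M-step -----
        Nk = gamma.sum(axis=0) + 1e-12
        pi = Nk / N
        for k in range(K):
            g = gamma[:, k]
            # MAP estimate of w_k under a Gaussian prior with precisions
            # alpha[k]; the Student-t prior arises by integrating a Gamma
            # hyperprior over these precisions.
            A = Phi.T @ Phi * (Nk[k] / sigma2[k]) + np.diag(alpha[k])
            rhs = Phi.T @ (g @ Y) / sigma2[k]
            W[k] = np.linalg.solve(A, rhs)
            resid = Y - Phi @ W[k]               # (N, T) residuals
            sigma2[k] = max((g * (resid ** 2).sum(axis=1)).sum() / (Nk[k] * T),
                            1e-8)
            # Fixed-point precision update from a Gamma(a, b) hyperprior:
            # small coefficients get large precisions and are shrunk to zero.
            alpha[k] = (2 * a + 1) / (W[k] ** 2 + 2 * b)

        # ----- E-step: responsibilities via log-sum-exp for stability -----
        log_resp = np.empty((N, K))
        for k in range(K):
            resid = Y - Phi @ W[k]
            log_resp[:, k] = (np.log(pi[k])
                              - 0.5 * T * np.log(2 * np.pi * sigma2[k])
                              - 0.5 * (resid ** 2).sum(axis=1) / sigma2[k])
        log_resp -= log_resp.max(axis=1, keepdims=True)
        gamma = np.exp(log_resp)
        gamma /= gamma.sum(axis=1, keepdims=True)

    return dict(W=W, sigma2=sigma2, pi=pi, alpha=alpha), gamma

# Usage on synthetic curves: cluster labels are the argmax responsibilities.
if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    Y = np.vstack([np.sin(2 * np.pi * t) + 0.1 * np.random.randn(20, 50),
                   (t ** 2)[None, :] + 0.1 * np.random.randn(20, 50)])
    params, gamma = fit_sparse_regression_mixture(Y, t, K=2, p=6)
    labels = gamma.argmax(axis=1)
```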
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Blekas, K., Galatsanos, N., Likas, A. (2008). A Sparse Regression Mixture Model for Clustering Time-Series. In: Darzentas, J., Vouros, G.A., Vosinakis, S., Arnellos, A. (eds) Artificial Intelligence: Theories, Models and Applications. SETN 2008. Lecture Notes in Computer Science, vol 5138. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87881-0_7
DOI: https://doi.org/10.1007/978-3-540-87881-0_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-87880-3
Online ISBN: 978-3-540-87881-0
eBook Packages: Computer Science (R0)