
Fusion of Similarity Measures for Time Series Classification

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6679)

Abstract

Time series classification, owing to its applications in a variety of domains, is one of the most important data-driven decision tasks in artificial intelligence. Recent results show that the simple nearest neighbor method with an appropriate distance measure performs surprisingly well, outperforming many state-of-the-art methods. This suggests that the choice of distance measure is crucial for time series classification. In this paper we briefly review the most important distance measures in the literature and, as our major contribution, propose a framework that allows these different similarity measures to be fused in a principled way. Within this framework, we develop a hybrid similarity measure. We evaluate it in the context of time series classification on a large, publicly available collection of 35 real-world datasets and show that our method achieves significant improvements in classification accuracy.
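The abstract combines two ingredients: nearest-neighbor classification under a time series distance measure, and fusion of several such measures into a hybrid one. The sketch below is purely illustrative and is not the authors' actual framework: it fuses Euclidean distance with dynamic time warping (DTW) via a fixed convex combination (the equal weights are a hypothetical choice; in a principled framework they would be learned) and classifies with 1-NN.

```python
import numpy as np

def euclidean(a, b):
    """Plain Euclidean (lock-step) distance between equal-length series."""
    return float(np.linalg.norm(a - b))

def dtw(a, b):
    """Unconstrained dynamic time warping distance via dynamic programming."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three predecessor alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def fused_distance(a, b, weights=(0.5, 0.5)):
    """Hypothetical convex combination of two distance measures."""
    return weights[0] * euclidean(a, b) + weights[1] * dtw(a, b)

def nn_classify(query, train_X, train_y, dist):
    """1-nearest-neighbor prediction under an arbitrary distance function."""
    idx = min(range(len(train_X)), key=lambda i: dist(query, train_X[i]))
    return train_y[idx]
```

With a tiny training set of a flat and a ramp series, a query resembling either shape is assigned the corresponding label; swapping in other measures (or learned weights) only changes the `dist` argument.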




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Buza, K., Nanopoulos, A., Schmidt-Thieme, L. (2011). Fusion of Similarity Measures for Time Series Classification. In: Corchado, E., Kurzyński, M., Woźniak, M. (eds) Hybrid Artificial Intelligent Systems. HAIS 2011. Lecture Notes in Computer Science, vol 6679. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21222-2_31


  • DOI: https://doi.org/10.1007/978-3-642-21222-2_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21221-5

  • Online ISBN: 978-3-642-21222-2

  • eBook Packages: Computer Science, Computer Science (R0)
