DOI: 10.1145/3126686.3126724
Research article

Reconstructable and Interpretable Representations for Time Series with Time-Skip Sparse Dictionary Learning

Published: 23 October 2017 Publication History

Abstract

Summarizing time series signals into essential patterns that preserve their original characteristics is challenging. A good summary allows the original signal to be reconstructed, while the reduced data size saves storage space and in turn accelerates subsequent processing. This paper proposes a dictionary learning method for time series signals with a mechanism that skips sparse codes along the time axis, exploiting temporal redundancy. The proposed method yields compact and accurate representations of time series. Experimental results demonstrate that the proposed method achieves low errors in both signal reconstruction and classification while reducing the representation size. The skipping mechanism degraded the signal reconstruction error by only about 5% of the error magnitude while shrinking the representation size by a factor of about 18. Classification based on the proposed methods is consistently more accurate than the state-of-the-art dictionary learning method for time series. The proposed idea is an effective option when applying dictionary learning, a fundamental technique in signal processing with a wide range of applications.
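The time-skip idea described in the abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm: it uses a fixed DCT basis instead of a learned dictionary, crude hard-thresholding instead of a proper sparse solver, and a hypothetical residual threshold for deciding when to skip. It only shows the mechanism: each window is sparse-coded, and a window reuses the previous window's code whenever that code still reconstructs it well, so no new code needs to be stored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: a noisy sine, split into fixed-length windows.
T, w = 512, 32
t = np.linspace(0, 8 * np.pi, T)
signal = np.sin(t) + 0.05 * rng.standard_normal(T)
windows = signal.reshape(-1, w)          # (n_windows, w)

# Stand-in dictionary: a DCT-II basis with unit-norm columns.
# (The paper learns the dictionary; this is just for illustration.)
D = np.array([[np.cos(np.pi * (i + 0.5) * k / w) for k in range(w)]
              for i in range(w)])
D /= np.linalg.norm(D, axis=0)

def sparse_code(x, D, k=4):
    """Crude sparse coding: keep only the k largest-magnitude coefficients."""
    c = D.T @ x                          # valid because D is orthonormal here
    c[np.argsort(np.abs(c))[:-k]] = 0.0  # zero out all but the top k
    return c

# Time-skip mechanism: reuse the previous code when its reconstruction
# still fits the current window (threshold 0.5 is an arbitrary choice).
codes, stored = [], 0
prev = None
for x in windows:
    if prev is not None and np.linalg.norm(x - D @ prev) < 0.5:
        codes.append(prev)               # skipped: no new code stored
    else:
        prev = sparse_code(x, D)
        codes.append(prev)
        stored += 1                      # a new code had to be stored

recon = np.concatenate([D @ c for c in codes])
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

When neighboring windows are similar, `stored` stays well below the number of windows and the representation shrinks accordingly, at the cost of a small increase in reconstruction error `err`.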


      Published In

Thematic Workshops '17: Proceedings of the Thematic Workshops of ACM Multimedia 2017
      October 2017
      558 pages
      ISBN:9781450354165
      DOI:10.1145/3126686

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. classification
      2. dictionary learning
      3. interpretability
      4. reconstruction
5. time series


      Funding Sources

      • JSPS KAKENHI
      • New Energy and Industrial Technology Development Organization

      Conference

      MM '17
      Sponsor:
      MM '17: ACM Multimedia Conference
      October 23 - 27, 2017
Mountain View, California, USA
