DOI: 10.1145/3534678.3539140 · KDD Conference Proceedings
research-article
Public Access

Towards Learning Disentangled Representations for Time Series

Published: 14 August 2022

Abstract

Promising progress has been made toward learning efficient time series representations in recent years, but the learned representations often lack interpretability and do not encode the semantic meanings that arise from the complex interactions of many latent factors. Learning representations that disentangle these latent factors can yield semantically rich representations of time series and further enhance interpretability. However, directly adopting sequential models, such as the Long Short-Term Memory Variational AutoEncoder (LSTM-VAE), runs into the Kullback–Leibler (KL) vanishing problem: the LSTM decoder often generates sequential data without efficiently using the latent representations, and the latent space can sometimes even become independent of the observation space. Moreover, traditional disentanglement methods may intensify KL vanishing as disentanglement proceeds, because they tend to penalize the mutual information between the latent space and the observations. In this paper, we propose Disentangle Time-Series (DTS), a novel disentanglement enhancement framework for time series data. Our framework achieves multi-level disentanglement by covering both individual latent factors and group-level semantic segments. We augment the original VAE objective by decomposing the evidence lower bound and extracting the terms that link factorial representations to disentanglement. Additionally, we introduce a mutual information maximization term between the observation space and the latent space to alleviate the KL vanishing problem while preserving the disentanglement property. Experimental results on five real-world IoT datasets demonstrate that the representations learned by DTS achieve superior performance on various tasks with better interpretability.
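To make the objective described above concrete, the following is a minimal, hypothetical sketch in plain Python of a beta-VAE-style training objective with an added mutual-information term, in the spirit of the abstract. The names `augmented_elbo`, `beta`, `lam`, and `mi_estimate` are illustrative assumptions, not the paper's actual DTS formulation; the exact ELBO decomposition used by DTS is not reproduced here.

```python
import math

def kl_diag_gaussian(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ): closed-form KL divergence of a
    # diagonal-Gaussian posterior from a standard-normal prior, summed over
    # latent dimensions.
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def augmented_elbo(recon_log_lik, mu, log_var, beta=1.0, lam=1.0, mi_estimate=0.0):
    # Objective to be MAXIMIZED:
    #   recon_log_lik - beta * KL(q(z|x) || p(z)) + lam * I(x; z)
    # `mi_estimate` stands in for some estimator of the mutual information
    # between observations and latents (how such an estimate is obtained is
    # a separate modeling choice).
    return recon_log_lik - beta * kl_diag_gaussian(mu, log_var) + lam * mi_estimate
```

Note how this surfaces the KL-vanishing failure mode: when the posterior collapses onto the prior (`mu = 0`, `log_var = 0`), the KL term is exactly zero and the objective reduces to the reconstruction term alone, so the decoder can ignore `z` entirely; a positive `lam * I(x; z)` term pushes against that collapse.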

Supplemental Material

MP4 File
Promising progress has been made toward learning efficient time series representations in recent years, but the learned representations often lack interpretability and do not encode the semantic meanings of the complex interactions of many latent factors. Learning representations that disentangle these latent factors can yield semantically rich representations of time series and further enhance interpretability. In this video, we propose Disentangle Time-Series, a novel disentanglement enhancement framework for time series data. Our framework achieves multi-level disentanglement by covering both individual latent factors and group-level semantic segments. We augment the original VAE objective by decomposing the evidence lower bound and extracting the terms that link factorial representations to disentanglement. Additionally, we introduce a mutual information maximization term between the observation space and the latent space to alleviate the KL vanishing problem while preserving the disentanglement property.


Cited By

  • (2025) Disentangled representational learning for anomaly detection in single-lead electrocardiogram signals using variational autoencoder. Computers in Biology and Medicine 184, Article 109422. DOI: 10.1016/j.compbiomed.2024.109422
  • (2024) Multiview Spatial-Temporal Meta-Learning for Multivariate Time Series Forecasting. Sensors 24(14), 4473. DOI: 10.3390/s24144473
  • (2024) Disentangled Representation Learning for Robust Radar Inter-Pulse Modulation Feature Extraction and Recognition. Remote Sensing 16(19), 3585. DOI: 10.3390/rs16193585
  • (2024) DisMouse: Disentangling Information from Mouse Movement Data. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-13. DOI: 10.1145/3654777.3676411
  • (2024) POND: Multi-Source Time Series Domain Adaptation with Information-Aware Prompt Tuning. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 3140-3151. DOI: 10.1145/3637528.3671721
  • (2024) Enhancing Interpretability of Electrical Load Forecasting with Architecture Optimization. In Proceedings of the 2024 ACM Southeast Conference, 217-222. DOI: 10.1145/3603287.3651198
  • (2024) Real-Time UAV Tracking Through Disentangled Representation With Mutual Information Maximization. IEEE Access 12, 135325-135337. DOI: 10.1109/ACCESS.2024.3439432
  • (2023) A Co-training Approach for Noisy Time Series Learning. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 3308-3318. DOI: 10.1145/3583780.3614759
  • (2023) On Hierarchical Disentanglement of Interactive Behaviors for Multimodal Spatiotemporal Data with Incompleteness. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 213-225. DOI: 10.1145/3580305.3599448
  • (2023) Incremental Causal Graph Learning for Online Root Cause Analysis. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2269-2278. DOI: 10.1145/3580305.3599392

    Published In

    KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
    August 2022
    5033 pages
    ISBN:9781450393850
    DOI:10.1145/3534678

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. deep generative model
    2. disentangled representation learning
    3. domain adaptation
    4. interpretable representation
    5. time series analysis

    Acceptance Rates

    Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

