
On the Effects of Filtering Methods on Adversarial Timeseries Data

Published: 21 November 2023

Abstract

Adversarial machine learning is well studied for image classification. Other domains, such as deep timeseries classification, have received far less attention, leaving them disproportionately vulnerable. In particular, adversarial defenses for deep timeseries classifiers have only been investigated in the context of attack detection, and the methods proposed so far perform poorly and fail to generalize across attacks, limiting their real-world applicability. In this work we investigate adversarial defense via input data purification for deep timeseries classifiers. We subject clean and adversarially-perturbed univariate timeseries data to 4 simple filtering methods to establish whether such methods could serve as purification-based adversarial defenses. In experiments involving 5 publicly-available datasets, we identify and compare the benefits of the various filtering techniques. Thereafter we discuss our results and provide directions for further investigation.
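The abstract does not name the four filtering methods evaluated. As an illustrative sketch only, assuming four common smoothing choices (moving average, median, Gaussian, and Savitzky-Golay filters, which are not necessarily those used in the paper), purification-style filtering of a perturbed univariate timeseries might look like:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter1d
from scipy.signal import savgol_filter

def purify(x, method="moving_average"):
    """Apply a simple denoising filter to a univariate timeseries x.

    The four methods here are illustrative choices; the paper's actual
    filters may differ.
    """
    if method == "moving_average":
        k = 5  # window length
        return np.convolve(x, np.ones(k) / k, mode="same")
    if method == "median":
        return median_filter(x, size=5)
    if method == "gaussian":
        return gaussian_filter1d(x, sigma=1.0)
    if method == "savitzky_golay":
        return savgol_filter(x, window_length=7, polyorder=2)
    raise ValueError(f"unknown method: {method}")

# A clean signal with a small additive perturbation standing in for an
# adversarial attack (real attacks are optimized, not random noise).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
perturbed = clean + 0.1 * rng.standard_normal(t.size)

for m in ["moving_average", "median", "gaussian", "savitzky_golay"]:
    mse = np.mean((purify(perturbed, m) - clean) ** 2)
    print(f"{m}: MSE vs clean = {mse:.5f}")
```

In a purification defense, such a filter would be applied to every input before it reaches the classifier, with the hope of removing the perturbation while preserving the features the model relies on.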


Cited By

  • (2024) Feature Map Purification for Enhancing Adversarial Robustness of Deep Timeseries Classifiers. In 2024 IEEE International Conference on Data Mining (ICDM). DOI: 10.1109/ICDM59182.2024.00007, 1--10. Online publication date: 9 December 2024.


    Published In

    GeoPrivacy '23: Proceedings of the 1st ACM SIGSPATIAL International Workshop on Geo-Privacy and Data Utility for Smart Societies
    November 2023, 38 pages
    ISBN: 9798400703515
    DOI: 10.1145/3615889

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. adversarial
    2. timeseries
    3. defense
    4. filtering

    Qualifiers

    • Research-article

    Conference

    GeoPrivacy '23

    Acceptance Rates

    Overall acceptance rate: 5 of 8 submissions (63%)
