Abstract
IoT devices are often associated with corresponding datasets, algorithms, and infrastructure. However, when deep learning algorithms are deployed on these devices, many potential threats arise in the basic IoT infrastructure. Deep learning is widely used as the underlying decision algorithm for classifying time series data, an important task in IoT data applications. Nevertheless, such models are vulnerable to adversarial examples, which pose risks in fields such as medicine and security, where a minor perturbation of the time series data can lead to a wrong decision. In this paper, we demonstrate white-box and random-noise attacks against time series data. Moreover, we present a method for generating adversarial examples that changes only one value of the original time series. To resist adversarial attacks, we train a detector that distinguishes adversarial examples from normal examples based on deep features, so that adversarial examples can be filtered out before further harm occurs. Experiments on the UCR datasets show that 97% of the adversarial examples generated by two common attack methods, FGSM and BIM, can be successfully detected.
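To make the two attack methods named above concrete, the following is a minimal NumPy sketch of FGSM (one gradient-sign step) and BIM (iterated FGSM with projection into an ε-ball), applied to a time series under a toy logistic classifier whose input gradient is known in closed form. This is an illustration of the attack mechanics only, not the paper's actual deep-network setup; the weights `w`, `b` and step sizes here are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic classifier p = sigmoid(w.x + b).
    x: 1-D time series, y: true label in {0, 1}, eps: perturbation budget.
    Returns x + eps * sign(grad_x of the cross-entropy loss)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

def bim_perturb(x, w, b, y, eps, steps, alpha):
    """BIM: repeat small FGSM steps of size alpha, then clip each
    coordinate back into the eps-ball around the original series."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm_perturb(x_adv, w, b, y, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

For a true label y = 1, both attacks push the model's confidence in class 1 down while keeping every point of the series within ε of its original value, which is what makes such perturbations hard to notice in sensor data.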
Acknowledgment
This work is supported by the National Key R&D Plan (No. 2018YFB1402500), the Research Start-up Fund of North China University of Technology (110051360002), and the General Project of the Science and Technology Plan of the Beijing Education Commission (No. KM201810009004).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, Z., Li, H., Zhang, M., Wang, J., Liu, C. (2020). A Method for Resisting Adversarial Attack on Time Series Classification Model in IoT System. In: Wang, G., Lin, X., Hendler, J., Song, W., Xu, Z., Liu, G. (eds) Web Information Systems and Applications. WISA 2020. Lecture Notes in Computer Science(), vol 12432. Springer, Cham. https://doi.org/10.1007/978-3-030-60029-7_50
DOI: https://doi.org/10.1007/978-3-030-60029-7_50
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60028-0
Online ISBN: 978-3-030-60029-7
eBook Packages: Computer Science (R0)