Abstract
Telemedicine applications have recently evolved to allow patients in underdeveloped areas to receive medical services. Meanwhile, machine learning (ML) techniques have been widely adopted in such telemedicine applications to help with disease diagnosis. The performance of these ML techniques, however, is limited by the fact that attackers can manipulate clean data to fool the model classifier, undermining the trustworthiness and robustness of these models. For instance, under attack, a benign sample can be classified as malicious and vice versa. Motivated by this, this paper explores this issue for telemedicine applications. In particular, it studies the impact of adversarial attacks on a mammographic image classifier. First, mammographic images are used to train and evaluate the accuracy of our proposed model. The original dataset is then poisoned to generate adversarial samples that can mislead the model. The structural similarity index (SSIM) is used to quantify the similarity between clean images and their adversarial counterparts. Results show that adversarial attacks can severely fool the ML model.
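The pipeline described above — perturbing a clean image to craft an adversarial sample, then measuring how visually similar the two remain — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gradient here is a random stand-in for a real loss gradient, the perturbation follows the sign-of-gradient (FGSM-style) rule cited in the references, and the SSIM is computed globally over the whole image rather than with the usual sliding window.

```python
import numpy as np

def fgsm_perturb(image, gradient, epsilon=0.03):
    """FGSM-style perturbation: step in the sign of the loss gradient,
    then clip back into the valid pixel range [0, 1]."""
    adv = image + epsilon * np.sign(gradient)
    return np.clip(adv, 0.0, 1.0)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM for images scaled to [0, 1].
    c1 = (0.01*L)^2 and c2 = (0.03*L)^2 with dynamic range L = 1."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

# Toy data: a random "clean" image and a random stand-in gradient.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
grad = rng.standard_normal((64, 64))  # would come from the classifier's loss
adv = fgsm_perturb(clean, grad, epsilon=0.03)

# SSIM close to 1 means the perturbation is nearly imperceptible,
# even though it may already flip the classifier's decision.
print(float(ssim_global(clean, adv)))
```

In practice the gradient would be obtained by backpropagating the classifier's loss through the input, and a windowed SSIM (as in the cited image-quality literature) would be used instead of this global variant.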
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Yilmaz, I., Baza, M., Amer, R., Rasheed, A., Amsaad, F., Morsi, R. (2021). On the Assessment of Robustness of Telemedicine Applications against Adversarial Machine Learning Attacks. In: Fujita, H., Selamat, A., Lin, J.CW., Ali, M. (eds) Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science(), vol 12798. Springer, Cham. https://doi.org/10.1007/978-3-030-79457-6_44
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79456-9
Online ISBN: 978-3-030-79457-6