
On the Assessment of Robustness of Telemedicine Applications against Adversarial Machine Learning Attacks

  • Conference paper
Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices (IEA/AIE 2021)

Abstract

Telemedicine applications have recently evolved to allow patients in underdeveloped areas to receive medical services. Meanwhile, machine learning (ML) techniques have been widely adopted in such telemedicine applications to help with disease diagnosis. The performance of these ML techniques, however, is limited by the fact that attackers can manipulate clean data to fool the model classifier, undermining the trustworthiness and robustness of these models. For instance, under attack, a benign sample can be classified as malicious and vice versa. Motivated by this, this paper explores this issue for telemedicine applications. In particular, it studies the impact of adversarial attacks on a mammographic image classifier. First, mammographic images are used to train and evaluate the accuracy of our proposed model. The original dataset is then poisoned to generate adversarial samples that can mislead the model. The structural similarity index (SSIM) is used to quantify the similarity between clean images and adversarial images. Results show that adversarial attacks can severely fool the ML model.
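The paper itself does not include code, but the pipeline the abstract describes (train a classifier on mammographic images, perturb the inputs, then compare clean and adversarial images with SSIM) can be illustrated compactly. The sketch below is a minimal, hypothetical example assuming a small Keras CNN and an FGSM-style perturbation, one common way of crafting adversarial samples; the helper names, architecture, input size, and epsilon are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch: craft FGSM-style adversarial mammogram images and
# measure their similarity to the clean originals with SSIM. Architecture,
# input size, and epsilon are assumptions for illustration only.
import tensorflow as tf
from skimage.metrics import structural_similarity as ssim

def build_classifier(input_shape=(128, 128, 1), num_classes=2):
    """A small CNN for benign-vs-malignant mammogram patches (assumed)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def fgsm_adversarial(model, images, labels, epsilon=0.05):
    """Fast Gradient Sign Method: nudge each pixel along the sign of the
    loss gradient so the classifier is pushed toward a wrong label."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        preds = model(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, preds)
    grad = tape.gradient(loss, images)
    adv = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adv, 0.0, 1.0)  # keep pixels in the valid range

def mean_ssim(clean, adversarial):
    """Average SSIM between clean and adversarial images; values near 1
    mean the perturbation is nearly imperceptible to a human reader."""
    scores = [ssim(c.squeeze(), a.squeeze(), data_range=1.0)
              for c, a in zip(clean, adversarial.numpy())]
    return sum(scores) / len(scores)
```

With such a setup, one would first confirm high accuracy on the clean test set, then re-evaluate the model on the FGSM outputs. The abstract's central observation is that accuracy can collapse even when the mean SSIM stays close to 1, i.e. when the adversarial images are nearly indistinguishable from the clean ones.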



Author information

Correspondence to Mohamed Baza.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Yilmaz, I., Baza, M., Amer, R., Rasheed, A., Amsaad, F., Morsi, R. (2021). On the Assessment of Robustness of Telemedicine Applications against Adversarial Machine Learning Attacks. In: Fujita, H., Selamat, A., Lin, J.C.-W., Ali, M. (eds) Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science, vol. 12798. Springer, Cham. https://doi.org/10.1007/978-3-030-79457-6_44


  • DOI: https://doi.org/10.1007/978-3-030-79457-6_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-79456-9

  • Online ISBN: 978-3-030-79457-6

  • eBook Packages: Computer Science, Computer Science (R0)
