
Adversarial Attacks on Visual Objects Using the Fast Gradient Sign Method

  • Research
  • Published:
Journal of Grid Computing

Abstract

Adversarial attacks exploit vulnerabilities or weaknesses in a model's decision-making process to generate inputs that appear benign to humans but lead to incorrect or unintended outputs. Deep neural networks (DNNs) are widely used for aerial detection, and their growing use has highlighted their vulnerability to adversarial examples intentionally designed to mislead them. Most adversarial attacks currently in use can only rarely deceive a black-box model. We employ the fast gradient sign method (FGSM) to rapidly refine the position of an adversarial region used to attack the target. In extensive experiments on two open datasets, the results show that, on average, only 400 queries are needed to perturb most test images into at least one erroneous class. The proposed method supports both untargeted and targeted attacks and achieves high query efficiency in both scenarios. The experiments manipulate input images using gradients or noise to produce misclassified outputs and are implemented in Python using the TensorFlow framework. Performance is optimized with an initial learning rate of 0.1, adjusting the learning rate according to the number of training samples across different epoch settings. Compared with prior studies, our technique is more effective at crafting adversarial examples and achieves higher accuracy. Moreover, it works effectively, needs only a few lines of code to implement, and provides a solid foundation for future black-box attacks.
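The abstract describes crafting adversarial images with FGSM in TensorFlow, i.e., perturbing an input x along the sign of the gradient of the loss with respect to x: x_adv = x + ε · sign(∇x J(θ, x, y)). The following is a minimal illustrative sketch, assuming a pretrained MobileNetV2 classifier, ε = 0.01, one-hot labels, and inputs preprocessed to the [-1, 1] range; these choices are assumptions for the example, not the paper's exact configuration.

import tensorflow as tf

# Illustrative target model: a pretrained ImageNet classifier (an assumption,
# not necessarily the model or weights used in the paper).
model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_attack(image, label, epsilon=0.01):
    """Return x + epsilon * sign(grad_x J(theta, x, y)) for a batched image."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)
    # Gradient of the loss with respect to the input pixels, not the weights.
    gradient = tape.gradient(loss, image)
    signed_grad = tf.sign(gradient)
    adversarial = image + epsilon * signed_grad
    # MobileNetV2 inputs are preprocessed to [-1, 1]; keep the result in range.
    return tf.clip_by_value(adversarial, -1.0, 1.0)

Here epsilon controls the perturbation magnitude; for a targeted attack the update direction is reversed and the loss is computed against the desired target class, and in a black-box setting the gradient would have to be estimated through model queries rather than read from the tape.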


Availability of Data and Materials

Data is available upon request.


Funding

This research received no specific grant from any funding agency.

Author information


Contributions

S.M.A: Conceptualization, Methodology, Formal analysis, Supervision, Writing - original draft, Writing - review & editing. M.S: Investigation, Data Curation, Validation, Resources, Writing - review & editing. M.A.K: Project administration, Investigation, Writing - review & editing. S.I.H: Writing - original draft, Writing - review & editing.

Corresponding author

Correspondence to Syed Muhammad Ali Naqvi.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Naqvi, S.M.A., Shabaz, M., Khan, M.A. et al. Adversarial Attacks on Visual Objects Using the Fast Gradient Sign Method. J Grid Computing 21, 52 (2023). https://doi.org/10.1007/s10723-023-09684-9


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10723-023-09684-9

Keywords