Error Resiliency and Adversarial Robustness in Convolutional Neural Networks: An Empirical Analysis

  • Conference paper
  • Internet of Things (IFIPIoT 2024)

Abstract

The increasing pervasiveness of Artificial Intelligence (AI) and Convolutional Neural Networks (CNNs) in edge-computing and Internet of Things applications poses several challenges, including the computational and power demands of predictive models, and their robustness against security threats, e.g., adversarial attacks. As for the former, approximate computing has emerged as one of the most promising solutions to lower the computational effort of AI, since the output of an approximate application is usually barely distinguishable from the exact one. Nevertheless, altering predictive models through approximation may jeopardize inner characteristics of CNNs, such as their adversarial robustness, i.e., their ability to discern legitimate inputs from systematically crafted malicious ones.
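To make the idea of approximation concrete, the following is a minimal sketch (not one of the hardware designs this paper evaluates) of a truncation-based approximate integer multiplier, where the `k` least-significant bits of each operand are discarded before multiplying, trading accuracy for a cheaper circuit:

```python
def approx_mul(a: int, b: int, k: int) -> int:
    """Toy truncation-based approximate multiplier.

    Zeroes the k least-significant bits of each operand before
    multiplying -- a simple software model of hardware approximate
    multipliers that drop low-order partial products.
    """
    mask = ~((1 << k) - 1)        # clears the k lowest bits
    return (a & mask) * (b & mask)


# Exact product: 13 * 11 = 143; with k=2 both operands are truncated
# (13 -> 12, 11 -> 8), giving an approximate product of 96.
exact = approx_mul(13, 11, 0)
approx = approx_mul(13, 11, 2)
```

The error is input-dependent and bounded by the truncated bits, which is why approximate applications often remain "barely distinguishable" from exact ones at the output while saving area and power.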

In this paper, we investigate the vulnerability of approximate CNNs to adversarial attacks. Specifically, we target approximate CNNs with different adversarial attacks in an evasion scenario, and we empirically show that approximation may actually compromise adversarial robustness.
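As a hedged illustration of the kind of evasion attack considered here (a standard FGSM-style perturbation, not the specific attack implementations used in the paper), the sketch below uses a logistic-regression "model" as a stand-in for a CNN, since its input gradient has a simple closed form:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM-style evasion attack on a logistic-regression
    stand-in model (hypothetical example, not the paper's setup).

    Moves the input x by eps in the direction that increases the
    binary cross-entropy loss, i.e. x' = x + eps * sign(dL/dx).
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # closed-form input gradient of BCE
    return x + eps * np.sign(grad_x)


# For a true label y=1, the attack lowers the model's confidence:
w = np.array([0.5, -1.0, 2.0])
b = 0.1
x = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.1)
```

For a CNN the same recipe applies with the gradient obtained by backpropagation; the question studied in the paper is whether such perturbations become more or less effective once the network's arithmetic is approximated.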

All the authors have contributed equally to this research.


Author information

Corresponding author

Correspondence to Salvatore Della Torca.

Copyright information

© 2025 IFIP International Federation for Information Processing

About this paper

Cite this paper

Barbareschi, M., Barone, S., Casola, V., Della Torca, S. (2025). Error Resiliency and Adversarial Robustness in Convolutional Neural Networks: An Empirical Analysis. In: Rey, G., Tigli, JY., Franquet, E. (eds) Internet of Things. IFIPIoT 2024. IFIP Advances in Information and Communication Technology, vol 737. Springer, Cham. https://doi.org/10.1007/978-3-031-81900-1_9

  • DOI: https://doi.org/10.1007/978-3-031-81900-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-81899-8

  • Online ISBN: 978-3-031-81900-1

  • eBook Packages: Computer Science, Computer Science (R0)
