
Generating Invariance-Based Adversarial Examples: Bringing Humans Back into the Loop

  • Conference paper
  • First Online:
Image Analysis and Processing - ICIAP 2023 Workshops (ICIAP 2023)


Included in the following conference series:

  • International Conference on Image Analysis and Processing (ICIAP)

Abstract

One of the major challenges in computer vision today is to align human and computer vision. From an adversarial machine learning perspective, we investigate invariance-based adversarial examples, which highlight differences between computer vision and human perception. We conduct a study with 25 human subjects, collecting eye-gaze data and time-constrained classification performance, to study how occlusion-based perturbations affect human and machine performance on a classification task. Subsequently, we propose two adaptive methods to generate invariance-based adversarial examples, one based on occlusion and the other on inserting patches from a second image. Both methods leverage the eye-tracking data obtained from our experiments. Our results suggest that invariance-based adversarial examples are possible even for complex data sets, but must be crafted with adequate care. Further research in this direction might help better align computer and human vision.



Acknowledgements

Florian Merkle and Pascal Schöttle are supported by the Austrian Science Fund (FWF) under grant no. I 4057-N31 (“Game Over Eva(sion)”). Martin Nocker is supported under the project “Secure Machine Learning Applications with Homomorphically Encrypted Data” (project no. 886524) by the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK) of Austria.

Author information

Correspondence to Florian Merkle.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Merkle, F., Sirbu, M.R., Nocker, M., Schöttle, P. (2024). Generating Invariance-Based Adversarial Examples: Bringing Humans Back into the Loop. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing - ICIAP 2023 Workshops. ICIAP 2023. Lecture Notes in Computer Science, vol 14365. Springer, Cham. https://doi.org/10.1007/978-3-031-51023-6_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-51023-6_2

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-51022-9

  • Online ISBN: 978-3-031-51023-6

  • eBook Packages: Computer Science, Computer Science (R0)
