Adversarial Edit Attacks for Tree Data

  • Conference paper
Intelligent Data Engineering and Automated Learning (IDEAL 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11871)


Abstract

Many machine learning models can be attacked with adversarial examples, i.e. inputs close to correctly classified examples that are classified incorrectly. However, most research on adversarial attacks to date is limited to vectorial data, in particular image data. In this contribution, we extend the field by introducing adversarial edit attacks for tree-structured data with potential applications in medicine and automated program analysis. Our approach solely relies on the tree edit distance and a logarithmic number of black-box queries to the attacked classifier without any need for gradient information.

We evaluate our approach on two programming and two biomedical data sets and show that many established tree classifiers, like tree-kernel-SVMs and recursive neural networks, can be attacked effectively.
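The logarithmic query count suggests a binary search over an edit script. The following is a hypothetical sketch of that idea, not the paper's implementation: in the paper the edit script would come from a tree edit distance backtrace between the input and a reference tree of another class, whereas here `edits` is simply a hand-written list of edit functions, and the names `apply_script`, `edit_attack`, and `has_x` are made up for illustration.

```python
def apply_script(tree, edits):
    """Apply a sequence of edit functions to a tree, in order."""
    for edit in edits:
        tree = edit(tree)
    return tree

def edit_attack(tree, edits, classify):
    """Binary-search for the shortest prefix of `edits` that changes the
    predicted label, using O(log n) black-box queries to `classify`.
    Assumes the label flips monotonically along the script; returns the
    perturbed tree, or None if even the full script does not flip it."""
    original = classify(tree)
    if classify(apply_script(tree, edits)) == original:
        return None
    # Invariant: prefix of length hi flips the label, prefix of length lo does not.
    lo, hi = 0, len(edits)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if classify(apply_script(tree, edits[:mid])) == original:
            lo = mid
        else:
            hi = mid
    return apply_script(tree, edits[:hi])

# Toy demo: trees are (label, children) tuples; the "classifier" just
# checks whether any node is labelled 'x'.
def has_x(tree):
    label, children = tree
    return label == "x" or any(has_x(c) for c in children)

t = ("a", [("b", []), ("c", [])])
# Three relabel edits; only the last one introduces an 'x'.
edits = [
    lambda t: ("a2", t[1]),
    lambda t: (t[0], [("b2", []), t[1][1]]),
    lambda t: (t[0], [t[1][0], ("x", [])]),
]
adv = edit_attack(t, edits, has_x)
```

In this toy run the search queries the classifier only at prefix lengths 3, 1, and 2 before returning the flipped tree; with a real edit script the prefix length would bound the tree edit distance of the adversarial example from the input.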

Support by the Bielefeld Young Researchers Fund is gratefully acknowledged.


Notes

  1. https://gitlab.ub.uni-bielefeld.de/bpaassen/python-edit-distances.

  2. http://joedsm.altervista.org/pythontreekernels.htm.

  3. All implementations and experiments are available at https://gitlab.ub.uni-bielefeld.de/bpaassen/adversarial-edit-attacks.



Author information

Correspondence to Benjamin Paaßen.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Paaßen, B. (2019). Adversarial Edit Attacks for Tree Data. In: Yin, H., Camacho, D., Tino, P., Tallón-Ballesteros, A., Menezes, R., Allmendinger, R. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2019. Lecture Notes in Computer Science, vol 11871. Springer, Cham. https://doi.org/10.1007/978-3-030-33607-3_39

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-33607-3_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33606-6

  • Online ISBN: 978-3-030-33607-3

  • eBook Packages: Computer Science (R0)
