Abstract
Word-substitution-based textual adversarial attack is essentially a combinatorial optimization problem. Existing greedy search methods are time-consuming because they make many unnecessary victim model calls during word ranking and substitution. In this work, we propose a learnable attack method that uses neural networks to guide the greedy search and thereby reduce victim model calls. Specifically, one network predicts the importance of each word without accessing the victim model, avoiding a number of victim model calls proportional to the length of the text, which may run into the hundreds. Another network scores each substitute word for a given position and filters out the attack-inefficient and out-of-context candidates, reducing the number of substitution attempts. We evaluate our method on sentiment analysis, natural language inference, and paraphrase identification tasks. Experimental results show that our method achieves a higher attack success rate and better adversarial example quality than the baseline methods, while requiring less computational overhead. The code is public at https://github.com/CMACH508/PredictiveTextualAttack.
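To make the search procedure concrete, below is a minimal, hypothetical sketch of a two-scorer-guided greedy attack of the kind the abstract describes. The paper trains neural networks for both roles; here `importance_net` and `substitute_net` are heuristic stand-ins, `toy_victim` is a toy keyword-based sentiment classifier, and all names and the synonym table are illustrative, not taken from the paper or its code.

```python
def toy_victim(words):
    """Toy sentiment 'victim': returns P(positive) from keyword counts."""
    pos = {"great", "good", "excellent"}
    neg = {"bad", "awful", "terrible"}
    score = sum(w in pos for w in words) - sum(w in neg for w in words)
    return 1 / (1 + 2.718281828 ** (-score))  # sigmoid

def importance_net(words):
    """Stand-in for the word-importance network: scores each position
    WITHOUT calling the victim (here, a crude sentiment-word heuristic)."""
    strong = {"great", "good", "excellent", "bad", "awful", "terrible"}
    return [1.0 if w in strong else 0.0 for w in words]

def substitute_net(words, pos, candidates, k=2):
    """Stand-in for the substitute-scoring network: keeps only the top-k
    candidates for a position, so fewer substitutions are tried on the victim."""
    return candidates[:k]

def greedy_attack(words, synonyms, threshold=0.5):
    """Greedy search guided by the two scorers. The victim is queried only
    for the filtered candidates, never for ranking word importance."""
    words = list(words)
    imp = importance_net(words)                      # no victim calls here
    order = sorted(range(len(words)), key=lambda i: -imp[i])
    calls = 0
    for i in order:
        cands = substitute_net(words, i, synonyms.get(words[i], []))
        best, best_prob = words[i], toy_victim(words)
        calls += 1
        for c in cands:
            trial = words[:i] + [c] + words[i + 1:]
            p = toy_victim(trial)
            calls += 1
            if p < best_prob:                        # keep the strongest swap
                best, best_prob = c, p
        words[i] = best
        if best_prob < threshold:                    # label flipped: success
            return words, calls
    return None, calls

syn = {"great": ["awful", "decent"]}
adv, n_calls = greedy_attack(["a", "great", "movie"], syn)
print(adv, n_calls)  # the single important word is swapped in 3 victim calls
```

In this toy run, only the position flagged by `importance_net` is attacked and only the filtered candidates are tried, so the victim is queried a handful of times instead of once per word per substitute, which is the query saving the method targets.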
Acknowledgement
This work is supported by the National Key R&D Program (2018AAA0100700) of the Ministry of Science and Technology of China, and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Guo, X., Tu, S., Xu, L. (2022). Learning to Generate Textual Adversarial Examples. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13529. Springer, Cham. https://doi.org/10.1007/978-3-031-15919-0_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15918-3
Online ISBN: 978-3-031-15919-0
eBook Packages: Computer Science (R0)