DOI: 10.1145/3302425.3302472
Research article

Adversarial Attacks on Word2vec and Neural Network

Published: 21 December 2018

Abstract

The security of machine learning is of great importance. The vulnerability of image models to adversarial examples has long been documented in the literature; this paper is concerned with the vulnerability of text models. Word2vec is widely used to produce word embeddings, which play an important role in natural language processing, and the quality of these embeddings affects the performance of the neural network that consumes them. The purpose of using word embeddings is to capture the useful information that may exist in the relationships between individual words. This paper proposes a method that alters the original text so that its word embeddings change as well, and the resulting adversarial samples are able to make the classifier misclassify.
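
To make the idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: it assumes a toy corpus, the gensim 4.x Word2Vec API, and a document representation built by averaging word vectors (all of these are illustrative assumptions, not details from the paper). It shows how substituting a single word shifts the embedding that a downstream classifier would receive; in an adversarial setting such substitutions are chosen so that the shift pushes the sample across the classifier's decision boundary.

```python
# Illustrative sketch only: toy corpus, mean-of-word-vectors features,
# and a hand-picked substitution; none of this is the paper's method.
import numpy as np
from gensim.models import Word2Vec  # gensim 4.x API assumed

# Toy corpus of tokenized sentences.
corpus = [
    "this movie was great and very enjoyable".split(),
    "a wonderful film with a great cast".split(),
    "this movie was terrible and very boring".split(),
    "an awful film with a dull plot".split(),
]

# Train word2vec on the toy corpus.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=200, seed=0)

def doc_embedding(tokens):
    """Represent a document as the average of its word vectors."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0)

original = "this movie was great and very enjoyable".split()
# Adversarial-style substitution of one sentiment-bearing word.
perturbed = [("terrible" if t == "great" else t) for t in original]

# The one-word change moves the document embedding; an attack would pick
# the substitution so this shift crosses the classifier's decision boundary.
shift = np.linalg.norm(doc_embedding(original) - doc_embedding(perturbed))
print(f"embedding shift from a one-word substitution: {shift:.4f}")
```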


Cited By

  • (2023) Semi-Supervised Model for Aspect Sentiment Detection. Information, 14(5), 293. DOI: 10.3390/info14050293. Online publication date: 16-May-2023
  • (2019) Detecting Spam Images with Embedded Arabic Text in Twitter. 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), pp. 1-6. DOI: 10.1109/ICDARW.2019.50107. Online publication date: Sep-2019


Published In

ACAI '18: Proceedings of the 2018 International Conference on Algorithms, Computing and Artificial Intelligence
December 2018
460 pages
ISBN:9781450366250
DOI:10.1145/3302425
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • The Hong Kong Polytechnic University
  • City University of Hong Kong

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 21 December 2018


Author Tags

  1. Adversarial sample
  2. Neural network
  3. Text classification
  4. Vulnerability
  5. Word2vec

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ACAI 2018

