Research Article · Public Access
DOI: 10.1145/3176258.3176321

Server-Based Manipulation Attacks Against Machine Learning Models

Authors Info & Claims
Published:13 March 2018Publication History

ABSTRACT

Machine learning approaches are increasingly applied to data analytics applications (e.g., spam filtering, image classification). Moreover, with the growing adoption of cloud computing, cloud services offer an efficient and easy-to-use way for users to train, store, and deploy machine learning models. However, models deployed in the cloud may be exposed to malicious attacks launched at the server side: an attacker with access to the server can stealthily manipulate a machine learning model to induce misclassification or introduce bias. In this work, we study such server-side manipulation attacks. We consider not only traditional supervised learning models but also state-of-the-art deep learning models. In particular, we present a simple but effective gradient descent based approach to exploit Logistic Regression (LR) and Convolutional Neural Network (CNN) [16] models. We evaluate manipulation attacks against machine learning and deep learning systems using both the Enron email corpus and the MNIST image dataset [17]. Experimental results demonstrate that such attacks can manipulate a model so that malicious samples easily evade detection, without compromising the overall performance of the system.
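To make the threat model concrete, the sketch below illustrates one possible form of such an attack, with details assumed rather than taken from the paper: an attacker with write access to a stored logistic regression model uses gradient descent to nudge its weights so that one chosen malicious sample is scored as benign, while a penalty term keeps predictions on the original data close to the clean model's. The synthetic data, step sizes, and trade-off weight `lam` are all hypothetical; the authors' actual procedure and hyperparameters may differ.

```python
# Illustrative sketch (not the paper's exact procedure): server-side manipulation
# of a trained logistic regression "spam filter" so a chosen malicious sample
# drifts toward the benign class while overall behaviour is preserved.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "deployed" model: weights w and bias b trained on synthetic features X, labels y.
d = 20
X = rng.normal(size=(200, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)      # synthetic spam/benign labels
w, b = rng.normal(size=d) * 0.01, 0.0
for _ in range(500):                                  # ordinary training by gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

x_mal = X[y == 1][0].copy()                           # spam sample the attacker wants missed
w0, b0 = w.copy(), b                                  # snapshot of the clean model

# Manipulation loop: push the malicious sample's score toward the benign class (label 0)
# while penalizing deviation from the clean model's predictions on the original data.
lam = 5.0                                             # assumed trade-off weight
for _ in range(200):
    p_mal = sigmoid(x_mal @ w + b)
    grad_w = p_mal * x_mal                            # gradient of -log(1 - p_mal) w.r.t. w
    grad_b = p_mal
    p_all, p_ref = sigmoid(X @ w + b), sigmoid(X @ w0 + b0)
    grad_w += lam * X.T @ (p_all - p_ref) / len(X)    # stay close to original behaviour
    grad_b += lam * np.mean(p_all - p_ref)
    w -= 0.05 * grad_w
    b -= 0.05 * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"malicious sample score: {sigmoid(x_mal @ w + b):.3f} (was {sigmoid(x_mal @ w0 + b0):.3f})")
print(f"overall accuracy after manipulation: {acc:.3f}")
```

The same idea carries over to a CNN in spirit: the attacker optimizes the stored parameters directly, trading off a misclassification objective for chosen samples against a constraint on aggregate accuracy, though the specific objective and constraints used in the paper are not reproduced here.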

References

  1. Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In OSDI, Vol. 16. 265--283.
  2. Roger Barga and Valentine Fontama. [n.d.]. Predictive Analytics with Microsoft Azure Machine Learning. Springer.
  3. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 387--402.
  4. Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389 (2012).
  5. Léon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010. Springer, 177--186.
  6. Xiaoyu Cao and Neil Zhenqiang Gong. 2017. Mitigating evasion attacks to deep neural networks via region-based classification. In Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 278--287.
  7. Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning 20, 3 (1995), 273--297.
  8. Harris Drucker, Donghui Wu, and Vladimir N. Vapnik. 1999. Support vector machines for spam categorization. IEEE Transactions on Neural Networks 10, 5 (1999), 1048--1054.
  9. Julian Fierrez-Aguilar, Javier Ortega-Garcia, Joaquin Gonzalez-Rodriguez, and Josef Bigun. 2005. Discriminative multimodal biometric authentication based on quality measures. Pattern Recognition 38, 5 (2005), 777--779.
  10. Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 1322--1333.
  11. Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. [n.d.]. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing.
  12. Google. [n.d.]. TensorFlow. https://www.tensorflow.org/.
  13. Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. 2017. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 (2017).
  14. Uyeong Jang, Xi Wu, and Somesh Jha. 2017. Objective metrics and gradient descent algorithms for adversarial examples in machine learning. In Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 262--277.
  15. Bryan Klimt and Yiming Yang. 2004. Introducing the Enron corpus. In CEAS.
  16. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097--1105.
  17. Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. 2010. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist (2010).
  18. Patrick McDaniel, Nicolas Papernot, and Z. Berkay Celik. 2016. Machine learning in adversarial settings. IEEE Security & Privacy 14, 3 (2016), 68--72.
  19. Vangelis Metsis, Ion Androutsopoulos, and Georgios Paliouras. 2006. Spam filtering with naive Bayes -- which naive Bayes? In CEAS, Vol. 17. 28--69.
  20. Microsoft. [n.d.]. Azure Machine Learning Studio. https://studio.azureml.net/.
  21. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  22. Mehran Mozaffari-Kermani, Susmita Sur-Kolay, Anand Raghunathan, and Niraj K. Jha. 2015. Systematic poisoning attacks on and defenses for machine learning in healthcare. IEEE Journal of Biomedical and Health Informatics 19, 6 (2015), 1893--1905.
  23. Srinivas Mukkamala, Guadalupe Janoski, and Andrew Sung. 2002. Intrusion detection using neural networks and support vector machines. In Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02), Vol. 2. IEEE, 1702--1707.
  24. Michael A. Nielsen. 2015. Neural Networks and Deep Learning. (2015).
  25. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 506--519.
  26. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 372--387.
  27. David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams, et al. [n.d.]. Learning representations by back-propagating errors. Cognitive Modeling 5, 3, 1.
  28. Shiqi Shen, Shruti Tople, and Prateek Saxena. 2016. Auror: Defending against poisoning attacks in collaborative deep learning systems. In Proceedings of the 32nd Annual Conference on Computer Security Applications. ACM, 508--519.
  29. Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M. Summers. 2016. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging 35, 5 (2016), 1285--1298.
  30. Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction APIs. In USENIX Security Symposium. 601--618.

Published in

CODASPY '18: Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy
March 2018, 401 pages
ISBN: 9781450356329
DOI: 10.1145/3176258

Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 13 March 2018


Acceptance Rates

CODASPY '18 paper acceptance rate: 23 of 110 submissions, 21%. Overall acceptance rate: 149 of 789 submissions, 19%.
