ABSTRACT
Machine learning approaches are increasingly applied to data-analytics tasks such as spam filtering and image classification. Moreover, with the growing adoption of cloud computing, cloud services offer users a convenient way to train, store, and deploy machine learning models. However, models deployed in the cloud may be exposed to malicious attacks launched at the server side: an attacker with access to the server can stealthily manipulate a machine learning model to cause misclassification or introduce bias. In this work, we study such server-side manipulation attacks against not only traditional supervised learning models but also state-of-the-art deep learning models. In particular, we present a simple but effective gradient descent based approach that exploits Logistic Regression (LR) and Convolutional Neural Network (CNN) [16] models. We evaluate manipulation attacks against machine learning and deep learning systems using both the Enron email corpus and the MNIST image dataset [17]. Experimental results demonstrate that such attacks can manipulate a model so that malicious samples evade detection easily, without compromising the overall performance of the system.
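To make the idea concrete, the sketch below illustrates one way a gradient descent based manipulation of a deployed LR model could look: an attacker with write access to the stored parameters nudges the weights so that a chosen malicious sample is scored as benign, while a second gradient term keeps the cross-entropy loss on clean data, and hence overall accuracy, close to its original value. This is a minimal illustration under stated assumptions, not the paper's exact algorithm; `manipulate_lr` and all parameter names are hypothetical.

```python
# Hypothetical sketch of a server-side manipulation attack on a trained
# logistic regression model; assumes the attacker can read and overwrite
# the stored weight vector w and bias b. Illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def manipulate_lr(w, b, X_clean, y_clean, x_target, lr=0.1, lam=5.0, steps=200):
    """Nudge (w, b) so x_target scores as benign (label 0) while the
    cross-entropy loss on the clean set stays close to its original value."""
    w, b = w.copy(), float(b)
    n = len(y_clean)
    for _ in range(steps):
        # Evasion term: gradient of -log(1 - p) for the target sample,
        # i.e. push the model toward predicting the benign class for x_target.
        p_t = sigmoid(w @ x_target + b)
        g_w_evade, g_b_evade = p_t * x_target, p_t
        # Stealth term: the usual cross-entropy gradient on clean data,
        # which keeps overall accuracy near that of the original model.
        p_c = sigmoid(X_clean @ w + b)
        g_w_clean = X_clean.T @ (p_c - y_clean) / n
        g_b_clean = np.mean(p_c - y_clean)
        w -= lr * (lam * g_w_evade + g_w_clean)
        b -= lr * (lam * g_b_evade + g_b_clean)
    return w, b

# Example use: silently overwrite the stored model with the tampered one.
# w_new, b_new = manipulate_lr(w_stored, b_stored, X_val, y_val, x_spam)
```

The weighting factor `lam` captures the trade-off the abstract describes: larger values push the target sample across the decision boundary faster, at the cost of a larger, and therefore more detectable, drop in clean-data accuracy.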
REFERENCES
- Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A System for Large-Scale Machine Learning. In OSDI, Vol. 16. 265--283.
- Roger Barga and Valentine Fontama. n.d. Predictive Analytics with Microsoft Azure Machine Learning. Springer.
- Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 387--402.
- Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389 (2012).
- Léon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010. Springer, 177--186.
- Xiaoyu Cao and Neil Zhenqiang Gong. 2017. Mitigating evasion attacks to deep neural networks via region-based classification. In Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 278--287.
- Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, Vol. 20, 3 (1995), 273--297.
- Harris Drucker, Donghui Wu, and Vladimir N. Vapnik. 1999. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, Vol. 10, 5 (1999), 1048--1054.
- Julian Fierrez-Aguilar, Javier Ortega-Garcia, Joaquin Gonzalez-Rodriguez, and Josef Bigun. 2005. Discriminative multimodal biometric authentication based on quality measures. Pattern Recognition, Vol. 38, 5 (2005), 777--779.
- Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 1322--1333.
- Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. n.d. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.
- Google. n.d. TensorFlow. https://www.tensorflow.org/.
- Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. 2017. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 (2017).
- Uyeong Jang, Xi Wu, and Somesh Jha. 2017. Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning. In Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 262--277.
- Bryan Klimt and Yiming Yang. 2004. Introducing the Enron Corpus. In CEAS.
- Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097--1105.
- Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. 2010. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist (2010).
- Patrick McDaniel, Nicolas Papernot, and Z. Berkay Celik. 2016. Machine learning in adversarial settings. IEEE Security & Privacy, Vol. 14, 3 (2016), 68--72.
- Vangelis Metsis, Ion Androutsopoulos, and Georgios Paliouras. 2006. Spam filtering with naive Bayes -- which naive Bayes? In CEAS, Vol. 17. 28--69.
- Microsoft. n.d. Azure Machine Learning Studio. https://studio.azureml.net/.
- Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Mehran Mozaffari-Kermani, Susmita Sur-Kolay, Anand Raghunathan, and Niraj K. Jha. 2015. Systematic poisoning attacks on and defenses for machine learning in healthcare. IEEE Journal of Biomedical and Health Informatics, Vol. 19, 6 (2015), 1893--1905.
- Srinivas Mukkamala, Guadalupe Janoski, and Andrew Sung. 2002. Intrusion detection using neural networks and support vector machines. In Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02), Vol. 2. IEEE, 1702--1707.
- Michael A. Nielsen. 2015. Neural Networks and Deep Learning. (2015).
- Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 506--519.
- Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 372--387.
- David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams, et al. n.d. Learning representations by back-propagating errors. Cognitive Modeling, Vol. 5, 3, 1.
- Shiqi Shen, Shruti Tople, and Prateek Saxena. 2016. Auror: defending against poisoning attacks in collaborative deep learning systems. In Proceedings of the 32nd Annual Conference on Computer Security Applications. ACM, 508--519.
- Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M. Summers. 2016. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, Vol. 35, 5 (2016), 1285--1298.
- Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing Machine Learning Models via Prediction APIs. In USENIX Security Symposium. 601--618.