Abstract
Machine learning plays a significant role in today's businesses and governments, where it is increasingly used as a tool for decision support and automation. However, these tools are not inherently robust or secure: they can be vulnerable to adversarial modification, leading to misclassification or compromised system security. The field of adversarial machine learning has therefore emerged to study the vulnerabilities of machine learning models and algorithms and to secure them against adversarial manipulation. In this paper, we present the recently proposed taxonomy of attacks on machine learning and draw distinctions from other taxonomies. Moreover, this paper brings together the state of the art in theory and practice of decision-timing attacks on machine learning and of defense strategies against them. Given the growing research interest in this field, we hope this study provides readers with the essential knowledge to successfully engage in research and practice of machine learning in adversarial environments.
Notes
- 1.
Manifold learning algorithms build decision functions that vary along the manifolds occupied by the data. Different classes form separate manifolds, and the learning algorithms indirectly implement the cluster assumption by not cutting through these manifolds [33].
References
1. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
2. Alpaydin, E.: Introduction to Machine Learning. The MIT Press, Cambridge (2010)
3. Chapelle, O., Zien, A.: Semi-supervised classification by low density separation. In: AISTATS 2005, pp. 57–64 (2005)
4. Yang, B., Sun, J.-T., Wang, T., Chen, Z.: Effective multi-label active learning for text classification. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 917–926 (2009)
5. Lughofer, E.: Hybrid active learning for reducing the annotation effort of operators in classification systems. Pattern Recognit. 45(2), 884–896 (2012)
6. Settles, B.: Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison (2010)
7. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in Neural Information Processing Systems, pp. 1289–1296 (2008)
8. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
9. Bojarski, M., et al.: End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016)
10. Chen, Z., Huang, X.: End-to-end learning for lane keeping of self-driving cars. In: 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1856–1860 (2017)
11. Bhowmick, A., Hazarika, S.M.: E-mail spam filtering: a review of techniques and trends. In: Advances in Electronics, Communication and Computing, pp. 583–590. Springer (2018)
12. Melo-Acosta, G.E., Duitama-Muñoz, F., Arias-Londoño, J.D.: Fraud detection in big data using supervised and semi-supervised learning techniques. In: 2017 IEEE Colombian Conference on Communications and Computing (COLCOM), pp. 1–6 (2017)
13. Ye, Y., Li, T., Adjeroh, D., Iyengar, S.S.: A survey on malware detection using data mining techniques. ACM Comput. Surv. 50(3), 41 (2017)
14. Perdisci, R., Ariu, D., Giacinto, G.: Scalable fine-grained behavioral clustering of HTTP-based malware. Comput. Networks 57(2), 487–500 (2013)
15. Lakhina, A., Crovella, M., Diot, C.: Diagnosing network-wide traffic anomalies. ACM SIGCOMM Comput. Commun. Rev. 34(4), 219–230 (2004)
16. Wang, K., Parekh, J.J., Stolfo, S.J.: Anagram: a content anomaly detector resistant to mimicry attack. In: International Workshop on Recent Advances in Intrusion Detection, pp. 226–248 (2006)
17. Barreno, M., Nelson, B., Sears, R., Joseph, A.D., Tygar, J.D.: Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16–25 (2006)
18. Barreno, M., Nelson, B., Joseph, A.D., Tygar, J.D.: The security of machine learning. Mach. Learn. 81(2), 121–148 (2010)
19. Suciu, O., Marginean, R., Kaya, Y., Daumé III, H., Dumitras, T.: When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. arXiv preprint arXiv:1803.06975 (2018)
20. Alfeld, S., Zhu, X., Barford, P.: Explicit defense actions against test-set attacks. In: AAAI, pp. 1274–1280 (2017)
21. Rosenberg, I., Shabtai, A., Rokach, L., Elovici, Y.: Generic black-box end-to-end attack against state of the art API call based malware classifiers. In: International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 490–510 (2018)
22. Carlini, N., Wagner, D.: Audio adversarial examples: targeted attacks on speech-to-text. arXiv preprint arXiv:1801.01944 (2018)
23. Xu, W., Qi, Y., Evans, D.: Automatically evading classifiers. In: Proceedings of the 2016 Network and Distributed System Security Symposium (2016)
24. Nelson, B., et al.: Query strategies for evading convex-inducing classifiers. J. Mach. Learn. Res. 13, 1293–1332 (2012)
25. Vorobeychik, Y., Kantarcioglu, M.: Adversarial machine learning. Synth. Lect. Artif. Intell. Mach. Learn. 12(3), 1–169 (2018)
26. Yao, G., Bi, J., Xiao, P.: Source address validation solution with OpenFlow/NOX architecture. In: 2011 19th IEEE International Conference on Network Protocols (ICNP), pp. 7–12 (2011)
27. Biggio, B., Fumera, G., Roli, F.: Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng. 26(4), 984–996 (2014)
28. Li, B., Vorobeychik, Y.: Scalable optimization of randomized operational decisions in adversarial classification settings. In: Artificial Intelligence and Statistics, pp. 599–607 (2015)
29. Tambe, M.: Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, Cambridge (2011)
30. Miao, C., Li, Q., Xiao, H., Jiang, W., Huai, M., Su, L.: Towards data poisoning attacks in crowd sensing systems. In: Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 111–120 (2018)
31. Goodfellow, I., McDaniel, P., Papernot, N.: Making machine learning robust against adversarial inputs. Commun. ACM 61(7), 56–66 (2018)
32. Hanif, M.A., Khalid, F., Putra, R.V.W., Rehman, S., Shafique, M.: Robust machine learning systems: reliability and security for deep neural networks. In: 2018 IEEE 24th International Symposium on On-Line Testing and Robust System Design (IOLTS), pp. 257–260 (2018)
33. Belkin, M., Matveeva, I., Niyogi, P.: Regularization and semi-supervised learning on large graphs. In: International Conference on Computational Learning Theory, pp. 624–638 (2004)
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Kianpour, M., Wen, SF. (2020). Timing Attacks on Machine Learning: State of the Art. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham. https://doi.org/10.1007/978-3-030-29516-5_10
Print ISBN: 978-3-030-29515-8
Online ISBN: 978-3-030-29516-5