
Timing Attacks on Machine Learning: State of the Art

  • Conference paper
Intelligent Systems and Applications (IntelliSys 2019)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1037)

Abstract

Machine learning plays a significant role in today’s business sectors and governments, where it is increasingly used as a tool to support decision making and automation. However, these tools are not inherently robust or secure, and can be vulnerable to adversarial modification that causes misclassification or puts system security at risk. As such, the field of adversarial machine learning has emerged to study the vulnerabilities of machine learning models and algorithms, and to secure them against adversarial manipulation. In this paper, we present the recently proposed taxonomy of attacks on machine learning and draw distinctions between it and other taxonomies. Moreover, this paper brings together the state of the art in theory and practice needed for decision timing attacks on machine learning and defense strategies against them. Given the increasing research interest in this field, we hope this study provides readers with the essential knowledge to successfully engage in research and practice of machine learning in adversarial environments.
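
The "decision timing attacks" mentioned in the abstract refer to manipulations made at prediction (decision) time rather than at training time. As a minimal illustration of the idea (a hypothetical sketch, not an example taken from the paper): an adversary who knows a linear classifier's weights can perturb a test input against the decision direction to flip its label.

```python
import numpy as np

# Hypothetical sketch of a decision-time (evasion) attack on a linear
# classifier; the toy data, model, and perturbation budget are assumptions
# for illustration, not details from the paper.
rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters: class 0 near (-2,-2), class 1 near (2,2).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# A correctly classified positive test point.
x = np.array([2.0, 2.0])
pred = int((x @ w + b) > 0)          # decision before the attack: 1

# Fast-gradient-style perturbation: step each feature against the
# sign of the learned weights to cross the decision boundary.
eps = 3.0
x_adv = x - eps * np.sign(w)
pred_adv = int((x_adv @ w + b) > 0)  # decision after the attack: 0
```

The attack never touches the training data; it exploits only the trained model at decision time, which is what distinguishes it from training-time (poisoning) attacks in the taxonomy the paper surveys.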

Notes

  1. Manifold learning algorithms build decision functions that differ across the manifolds occupied by the data. Different classes form separate manifolds, and the learning algorithms indirectly implement the cluster assumption by not cutting through them [33].
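
The cluster assumption in this note can be illustrated with a small label-propagation sketch (a hypothetical toy example, not from the paper): labels spread along densely connected regions of the data and effectively never cross the low-density gap between clusters, so the implied decision boundary does not cut a manifold.

```python
import numpy as np

# Six 1-D points forming two tight clusters, one labelled point per cluster.
X = np.array([[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]])
labels = np.array([0, -1, -1, 1, -1, -1])  # -1 marks unlabelled points

# RBF affinity: strong edges within a cluster, negligible edges across the gap.
W = np.exp(-((X - X.T) ** 2) / 0.5)
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)  # row-normalised transition matrix

# One-hot label matrix; unlabelled rows start at zero.
Y = np.zeros((6, 2))
for i, c in enumerate(labels):
    if c >= 0:
        Y[i, c] = 1.0

# Iterative propagation, clamping the labelled points at each step.
F = Y.copy()
for _ in range(100):
    F = P @ F
    F[labels >= 0] = Y[labels >= 0]

pred = F.argmax(axis=1)  # labels stay within their cluster: [0 0 0 1 1 1]
```

Because the cross-cluster affinities are vanishingly small, each cluster inherits the label of its single labelled point, which is exactly the behaviour the cluster assumption describes.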

References

  1. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)

  2. Alpaydin, E.: Introduction to Machine Learning. The MIT Press, Cambridge (2010)

  3. Chapelle, O., Zien, A.: Semi-supervised classification by low density separation. In: AISTATS 2005, vol. 2005, pp. 57–64 (2005)

  4. Yang, B., Sun, J.-T., Wang, T., Chen, Z.: Effective multi-label active learning for text classification. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 917–926 (2009)

  5. Lughofer, E.: Hybrid active learning for reducing the annotation effort of operators in classification systems. Pattern Recognit. 45(2), 884–896 (2012)

  6. Settles, B.: Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison (2010)

  7. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in Neural Information Processing Systems, pp. 1289–1296 (2008)

  8. Sutton, R.S., Barto, A.G.: Introduction to Reinforcement Learning, vol. 135. MIT Press, Cambridge (1998)

  9. Bojarski, M., et al.: End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016)

  10. Chen, Z., Huang, X.: End-to-end learning for lane keeping of self-driving cars. In: 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 1856–1860 (2017)

  11. Bhowmick, A., Hazarika, S.M.: E-mail spam filtering: a review of techniques and trends. In: Advances in Electronics, Communication and Computing, pp. 583–590. Springer (2018)

  12. Melo-Acosta, G.E., Duitama-Muñoz, F., Arias-Londoño, J.D.: Fraud detection in big data using supervised and semi-supervised learning techniques. In: 2017 IEEE Colombian Conference on Communications and Computing (COLCOM), pp. 1–6 (2017)

  13. Ye, Y., Li, T., Adjeroh, D., Iyengar, S.S.: A survey on malware detection using data mining techniques. ACM Comput. Surv. 50(3), 41 (2017)

  14. Perdisci, R., Ariu, D., Giacinto, G.: Scalable fine-grained behavioral clustering of HTTP-based malware. Comput. Networks 57(2), 487–500 (2013)

  15. Lakhina, A., Crovella, M., Diot, C.: Diagnosing network-wide traffic anomalies. ACM SIGCOMM Comput. Commun. Rev. 34(4), 219–230 (2004)

  16. Wang, K., Parekh, J.J., Stolfo, S.J.: Anagram: a content anomaly detector resistant to mimicry attack. In: International Workshop on Recent Advances in Intrusion Detection, pp. 226–248 (2006)

  17. Barreno, M., Nelson, B., Sears, R., Joseph, A.D., Tygar, J.D.: Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pp. 16–25 (2006)

  18. Barreno, M., Nelson, B., Joseph, A.D., Tygar, J.D.: The security of machine learning. Mach. Learn. 81(2), 121–148 (2010)

  19. Suciu, O., Marginean, R., Kaya, Y., Daumé III, H., Dumitras, T.: When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. arXiv preprint arXiv:1803.06975 (2018)

  20. Alfeld, S., Zhu, X., Barford, P.: Explicit defense actions against test-set attacks. In: AAAI, pp. 1274–1280 (2017)

  21. Rosenberg, I., Shabtai, A., Rokach, L., Elovici, Y.: Generic black-box end-to-end attack against state of the art API call based malware classifiers. In: International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 490–510 (2018)

  22. Carlini, N., Wagner, D.: Audio adversarial examples: targeted attacks on speech-to-text. arXiv preprint arXiv:1801.01944 (2018)

  23. Xu, W., Qi, Y., Evans, D.: Automatically evading classifiers. In: Proceedings of the 2016 Network and Distributed System Security Symposium (NDSS) (2016)

  24. Nelson, B., et al.: Query strategies for evading convex-inducing classifiers. J. Mach. Learn. Res. 13(May), 1293–1332 (2012)

  25. Vorobeychik, Y., Kantarcioglu, M.: Adversarial machine learning. Synth. Lect. Artif. Intell. Mach. Learn. 12(3), 1–169 (2018)

  26. Yao, G., Bi, J., Xiao, P.: Source address validation solution with OpenFlow/NOX architecture. In: 2011 19th IEEE International Conference on Network Protocols (ICNP), pp. 7–12 (2011)

  27. Biggio, B., Fumera, G., Roli, F.: Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng. 26(4), 984–996 (2014)

  28. Li, B., Vorobeychik, Y.: Scalable optimization of randomized operational decisions in adversarial classification settings. In: Artificial Intelligence and Statistics, pp. 599–607 (2015)

  29. Tambe, M.: Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, Cambridge (2011)

  30. Miao, C., Li, Q., Xiao, H., Jiang, W., Huai, M., Su, L.: Towards data poisoning attacks in crowd sensing systems. In: Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 111–120 (2018)

  31. Goodfellow, I., McDaniel, P., Papernot, N.: Making machine learning robust against adversarial inputs. Commun. ACM 61(7), 56–66 (2018)

  32. Hanif, M.A., Khalid, F., Putra, R.V.W., Rehman, S., Shafique, M.: Robust machine learning systems: reliability and security for deep neural networks. In: 2018 IEEE 24th International Symposium on On-Line Testing And Robust System Design (IOLTS), pp. 257–260 (2018)

  33. Belkin, M., Matveeva, I., Niyogi, P.: Regularization and semi-supervised learning on large graphs. In: International Conference on Computational Learning Theory, pp. 624–638 (2004)


Author information

Corresponding author

Correspondence to Mazaher Kianpour.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Kianpour, M., Wen, SF. (2020). Timing Attacks on Machine Learning: State of the Art. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham. https://doi.org/10.1007/978-3-030-29516-5_10
