DOI: 10.1145/1654988.1654990
Research article

A framework for quantitative security analysis of machine learning

Published: 09 November 2009

Abstract

We propose a framework for quantitative security analysis of machine learning methods. The key parts of this framework are the formal specification of a deployed learning model and the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on the adversarial impact. As an example, we apply the framework to the analysis of one specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the obtained theoretical bounds.




Published In

AISec '09: Proceedings of the 2nd ACM workshop on Security and artificial intelligence
November 2009
72 pages
ISBN: 9781605587813
DOI: 10.1145/1654988
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. adversarial learning
  2. centroid anomaly detection
  3. computer security
  4. intrusion detection
  5. machine learning

Qualifiers

  • Research-article

Conference

CCS '09

Acceptance Rates

Overall acceptance rate: 94 of 231 submissions (41%)



Cited By

  • (2024) "Application research of Webshell detection system based on deep learning." 2024 International Applied Computational Electromagnetics Society Symposium (ACES-China), pp. 1-3. DOI: 10.1109/ACES-China62474.2024.10699885.
  • (2022) "Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification." IEEE Access, 10:102266-102291. DOI: 10.1109/ACCESS.2022.3208131.
  • (2022) "Towards a robust and trustworthy machine learning system development." Journal of Information Security and Applications, 65(C). DOI: 10.1016/j.jisa.2022.103121.
  • (2022) "A review of spam email detection: analysis of spammer strategies and the dataset shift problem." Artificial Intelligence Review, 56(2):1145-1173. DOI: 10.1007/s10462-022-10195-4.
  • (2020) "The relationship between trust in AI and trustworthy machine learning technologies." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 272-283. DOI: 10.1145/3351095.3372834.
  • (2020) "A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning." IEEE Transactions on Emerging Topics in Computational Intelligence, 4(4):450-467. DOI: 10.1109/TETCI.2020.2968933.
  • (2020) "When Machine Learning Meets Privacy in 6G: A Survey." IEEE Communications Surveys & Tutorials, 22(4):2694-2724. DOI: 10.1109/COMST.2020.3011561.
  • (2020) "Seven Pitfalls of Using Data Science in Cybersecurity." Data Science in Cybersecurity and Cyberthreat Intelligence, pp. 115-129. DOI: 10.1007/978-3-030-38788-4_6.
  • (2020) "Adversarial machine learning for cybersecurity and computer vision." WIREs Computational Statistics, 12(5). DOI: 10.1002/wics.1511.
  • (2019) "Robust Fake News Detection Over Time and Attack." ACM Transactions on Intelligent Systems and Technology, 11(1):1-23. DOI: 10.1145/3363818.
