
One-and-a-Half-Class Multiple Classifier Systems for Secure Learning Against Evasion Attacks at Test Time

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9132)

Abstract

Pattern classifiers have been widely used in adversarial settings like spam and malware detection, although they were not originally designed to cope with intelligent attackers who manipulate data at test time to evade detection. While a number of adversary-aware learning algorithms have been proposed, they are computationally demanding and aim to counter specific kinds of adversarial data manipulation. In this work, we overcome these limitations by proposing a multiple classifier system that improves security against evasion attacks at test time by learning a decision function that more tightly encloses the legitimate samples in feature space, without significantly compromising accuracy in the absence of attack. Since we combine a set of one-class and two-class classifiers to this end, we name our approach one-and-a-half-class (1.5C) classification. Our proposal is general and can be used to improve the security of any classifier against evasion attacks at test time, as shown by the reported experiments on spam and malware detection.
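
As a rough illustration of the idea (and not the authors' actual combiner), the following is a minimal sketch of a 1.5C-style ensemble built with scikit-learn: a standard two-class SVM is paired with one-class SVMs trained separately on the legitimate and malicious classes, and their decision scores are simply averaged. The class name, the SVM parameters, and the averaging rule are assumptions introduced here for illustration; the paper's classifier choices and combination strategy may differ.

```python
# Hypothetical 1.5C-style ensemble (illustrative sketch only, not the paper's
# implementation): combine a two-class SVM with one-class SVMs trained on
# each class, so that the decision region encloses the legitimate class
# more tightly than the two-class boundary alone.
import numpy as np
from sklearn.svm import SVC, OneClassSVM


class OneAndAHalfClassEnsemble:
    def __init__(self, nu=0.1):
        self.two_class = SVC(kernel="rbf", gamma="scale")
        self.oc_legitimate = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)
        self.oc_malicious = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)

    def fit(self, X, y):
        # X, y are NumPy arrays; convention assumed here:
        # y = +1 for malicious samples, y = -1 for legitimate samples.
        self.two_class.fit(X, y)
        self.oc_legitimate.fit(X[y == -1])
        self.oc_malicious.fit(X[y == +1])
        return self

    def decision_function(self, X):
        # Higher scores mean "more malicious". The one-class score on the
        # legitimate class is negated so that samples lying far from the
        # region of legitimate data are pushed towards a malicious decision.
        # NOTE: the three scores live on different scales; a real system
        # would normalize them or learn the combination instead of averaging.
        s_two = self.two_class.decision_function(X)
        s_legit = -self.oc_legitimate.decision_function(X)
        s_mal = self.oc_malicious.decision_function(X)
        return (s_two + s_legit + s_mal) / 3.0

    def predict(self, X):
        return np.where(self.decision_function(X) >= 0.0, +1, -1)
```

A model sketched this way would be used like any scikit-learn estimator, fitting on the training set and calling predict or decision_function on (possibly manipulated) test samples.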


Notes

  1. Typically, the underlying process \(p\) is not known, and we are only given a finite set of samples ideally drawn from it. The task of learning then amounts to minimizing a trade-off between the empirical risk computed on this set and a regularization term (or using a restricted class of functions) to avoid overfitting [15]; a standard formulation is sketched just after these notes.

  2. http://spamassassin.apache.org/, http://spambayes.sourceforge.net/.
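
For reference, note 1 alludes to the standard regularized empirical risk minimization setting of [15]. A textbook formulation (with the loss \(\ell\), function class \(\mathcal{F}\), regularizer \(\Omega\), and trade-off parameter \(\lambda\) introduced here only for illustration) reads

\[
  f^{\star} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big) + \lambda\, \Omega(f),
\]

where \((x_i, y_i)_{i=1}^{n}\) is the finite training set ideally drawn from \(p\), and \(\lambda \ge 0\) controls the trade-off between data fit and model complexity.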

References

  1. Dalvi, N., Domingos, P., Mausam, Sanghai, S., Verma, D.: Adversarial classification. In: 10th International Conference on Knowledge Discovery and Data Mining. ACM, pp. 99–108 (2004)

  2. Lowd, D., Meek, C.: Adversarial learning. In: 11th International Conference on Knowledge Discovery and Data Mining. ACM, pp. 641–647 (2005)

  3. Barreno, M., Nelson, B., Sears, R., Joseph, A.D., Tygar, J.D.: Can machine learning be secure? In: ASIA CCS. ACM, pp. 16–25 (2006)

  4. Barreno, M., Nelson, B., Joseph, A., Tygar, J.: The security of machine learning. Mach. Learn. 81, 121–148 (2010)

  5. Huang, L., Joseph, A.D., Nelson, B., Rubinstein, B., Tygar, J.D.: Adversarial machine learning. In: 4th Workshop Artificial Intelligence and Security. ACM, pp. 43–57 (2011)

  6. Biggio, B., Fumera, G., Roli, F.: Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng. 26(4), 984–996 (2014)

  7. Biggio, B., Fumera, G., Roli, F.: Pattern recognition systems under attack: design issues and research challenges. Int. J. Pattern Recogn. Artif. Intell. 28(7), 21 (2014)

  8. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., Roli, F.: Evasion attacks against machine learning at test time. In: Blockeel, H., Kersting, K., Nijssen, S., Železný, F. (eds.) ECML PKDD 2013, Part III. LNCS, vol. 8190, pp. 387–402. Springer, Heidelberg (2013)

  9. Gori, M., Scarselli, F.: Are multilayer perceptrons adequate for pattern recognition and verification? IEEE TPAMI 20(11), 1121–1132 (1998)

  10. Tax, D.M.J.: One-class classification. Ph.D. thesis (2001)

  11. Biggio, B., Fumera, G., Roli, F.: Design of robust classifiers for adversarial environments. In: IEEE International Conference on Systems, Man, and Cybernetics, pp. 977–982 (2011)

  12. Globerson, A., Roweis, S.: Nightmare at test time: robust learning by feature deletion. In: 23rd ICML, vol. 148, pp. 353–360. ACM (2006)

13. Teo, C.H., Globerson, A., Roweis, S., Smola, A.: Convex learning with invariances. In: Platt, J., et al. (eds.) NIPS 20, pp. 1489–1496. MIT Press (2008)

  14. Brückner, M., Kanzow, C., Scheffer, T.: Static prediction games for adversarial learning problems. J. Mach. Learn. Res. 13, 2617–2654 (2012)

  15. Vapnik, V.N.: The nature of statistical learning theory. Springer, New York (1995)

  16. Nelson, B., Rubinstein, B.I., Huang, L., Joseph, A.D., Lee, S.J., Rao, S., Tygar, J.D.: Query strategies for evading convex-inducing classifiers. J. Mach. Learn. Res. 13, 1293–1332 (2012)

17. Nelson, B., et al.: Exploiting machine learning to subvert your spam filter. In: 1st USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), pp. 1–9 (2008)

  18. Biggio, B., Fumera, G., Roli, F.: Multiple classifier systems for robust classifier design in adversarial environments. Int. J. Mach. Learn. Cyb. 1(1), 27–41 (2010)

  19. Jorgensen, Z., Zhou, Y., Inge, M.: A multiple instance learning strategy for combating good word attacks on spam filters. J. Mach. Learn. Res. 9, 1115–1146 (2008)

20. Cormack, G.V.: TREC 2007 spam track overview. In: Voorhees, E.M., Buckland, L.P. (eds.) TREC 2007, NIST Special Publication 500-274 (2007)

  21. Sebastiani, F.: Machine learning in automated text categorization. ACM Comput. Surv. 34, 1–47 (2002)

  22. Maiorca, D., Corona, I., Giacinto, G.: Looking at the bag is not enough to find the bomb: an evasion of structural methods for malicious PDF files detection. In: 8th ASIA CCS, pp. 119–130. ACM (2013)

  23. Kolcz, A., Teo, C.H.: Feature weighting for improved classifier robustness. In: 6th Conference on Email and Anti-spam (2009)

  24. Sutton, C., Sindelar, M., McCallum, A.: Feature bagging: preventing weight undertraining in structured discriminative learning. Technical report, IR-402, University of Massachusetts (2005)

  25. Zhou, Y., Kantarcioglu, M., Thuraisingham, B., Xi, B.: Adversarial support vector machine learning. In: 18th International Conference on Knowledge Discovery and Data Mining, pp. 1059–1067. ACM (2012)

  26. Biggio, B., Fumera, G., Roli, F.: Multiple classifier systems for adversarial classification tasks. In: Benediktsson, J.A., Kittler, J., Roli, F. (eds.) MCS 2009. LNCS, vol. 5519, pp. 132–141. Springer, Heidelberg (2009)

  27. Biggio, B., Fumera, G., Roli, F.: Multiple classifier systems under attack. In: El Gayar, N., Kittler, J., Roli, F. (eds.) MCS 2010. LNCS, vol. 5997, pp. 74–83. Springer, Heidelberg (2010)

Acknowledgments

This work has been partly supported by the projects CRP-18293 and CRP-59872, both funded by Regione Autonoma della Sardegna, L.R. 7/2007, respectively with Bando 2009 and Bando 2012.

Author information

Correspondence to Battista Biggio.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Biggio, B., et al. (2015). One-and-a-Half-Class Multiple Classifier Systems for Secure Learning Against Evasion Attacks at Test Time. In: Schwenker, F., Roli, F., Kittler, J. (eds.) Multiple Classifier Systems. MCS 2015. Lecture Notes in Computer Science, vol. 9132. Springer, Cham. https://doi.org/10.1007/978-3-319-20248-8_15

  • DOI: https://doi.org/10.1007/978-3-319-20248-8_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20247-1

  • Online ISBN: 978-3-319-20248-8
