Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6713)

Abstract

Pattern recognition systems have been widely used in adversarial classification tasks like spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by “poisoning” its training data with carefully designed attacks. Bagging is a well-known ensemble construction method, where each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in training data, especially if the most outlying observations are resampled with a lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and, thus, bagging ensembles may be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter, and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and give us valuable insights for future research work.
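As a rough illustration of the defence discussed above, the Python sketch below trains a bagging ensemble on weighted bootstrap replicates, where training points that look more outlying are resampled with lower probability, so that any single (possibly poisoned) point influences fewer base classifiers. This is only a minimal sketch assuming scikit-learn and NumPy; the base learner (BernoulliNB), the centroid-distance outlyingness score, and the 1/(1+distance) weighting are illustrative assumptions, not the exact procedure evaluated in the paper.

```python
# Minimal sketch of bagging with a weighted bootstrap as a defence against
# training-set poisoning. Assumes scikit-learn and NumPy; the outlyingness
# score and the down-weighting rule are illustrative, not the paper's own.
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import BernoulliNB


def fit_weighted_bagging(X, y, base_estimator=None, n_estimators=50, random_state=0):
    """Train an ensemble on bootstrap replicates drawn with non-uniform
    probabilities: samples far from their class centroid (a crude proxy for
    outlyingness, and hence for possible poisoning) are resampled less often."""
    rng = np.random.default_rng(random_state)
    base_estimator = base_estimator if base_estimator is not None else BernoulliNB()
    n = X.shape[0]

    # Illustrative outlyingness score: distance to the centroid of the sample's own class.
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    dist = np.array([np.linalg.norm(X[i] - centroids[y[i]]) for i in range(n)])

    # Larger distance -> lower resampling probability (uniform probabilities
    # would give standard bagging).
    weights = 1.0 / (1.0 + dist)
    probs = weights / weights.sum()

    ensemble = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=probs)  # weighted bootstrap replicate
        ensemble.append(clone(base_estimator).fit(X[idx], y[idx]))
    return ensemble


def predict_majority(ensemble, X):
    """Combine the base classifiers by majority voting (labels must be
    non-negative integers, e.g. 0 = legitimate, 1 = spam/attack)."""
    votes = np.stack([clf.predict(X) for clf in ensemble]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), axis=0, arr=votes)
```

With uniform sampling probabilities the loop above reduces to standard bagging; the down-weighting only limits how often the most outlying observations, which poisoned points are argued to resemble, appear across the bootstrap replicates.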

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Biggio, B., Corona, I., Fumera, G., Giacinto, G., Roli, F. (2011). Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks. In: Sansone, C., Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2011. Lecture Notes in Computer Science, vol 6713. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21557-5_37

  • DOI: https://doi.org/10.1007/978-3-642-21557-5_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21556-8

  • Online ISBN: 978-3-642-21557-5

  • eBook Packages: Computer Science (R0)
