
Label flipping attacks against Naive Bayes on spam filtering systems


Abstract

A label flipping attack is a poisoning attack that flips the labels of training samples to degrade the classification performance of a model. Robustness measures how well a machine learning algorithm withstands such adversarial attacks. The Naive Bayes (NB) algorithm is a noise-tolerant, robust machine learning technique, and it shows good robustness on tasks such as document classification and spam filtering. Here we propose two novel label flipping attacks to evaluate the robustness of NB under label noise. On three datasets from the spam classification domain, Spambase, TREC 2006c, and TREC 2007, our attack goal is to increase the false negative rate of NB under label noise without affecting the classification of legitimate mail. Our evaluation shows that at a noise level of 20%, the false negative rate on Spambase and TREC 2006c increases by about 20%, and the test error on TREC 2007 rises to nearly 30%. We compare the classification accuracy of five classic machine learning algorithms (random forest (RF), support vector machine (SVM), decision tree (DT), logistic regression (LR), and NB) and two deep learning models (AlexNet and LeNet) under the proposed label flipping attacks. The experimental results show that both kinds of label noise apply to a variety of classification models and effectively reduce their accuracy.
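To make the attack setting concrete, the sketch below shows a minimal random label flipping experiment against NB: spam labels in the training set are flipped to ham at a given noise level, which drives up the false negative rate at test time. This is an illustrative simplification, not the paper's targeted attacks; the synthetic data (a stand-in for Spambase-style features), the flipping rule, and all function names are assumptions.

```python
# Minimal sketch of a label flipping attack on Naive Bayes.
# Synthetic stand-in data; the paper's attacks are targeted,
# whereas this flips spam labels uniformly at random.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical spam-like dataset: label 1 = spam, 0 = ham
# (57 features, mirroring Spambase's feature count).
X, y = make_classification(n_samples=4000, n_features=57, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def false_negative_rate(y_true, y_pred):
    # Fraction of spam the classifier lets through.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fn / (fn + tp)

def flip_spam_labels(y, noise_level, rng):
    # Flip spam (1) -> ham (0) for a fraction of the training
    # samples, pushing NB toward missing spam.
    y_noisy = y.copy()
    spam_idx = np.flatnonzero(y == 1)
    n_flip = int(noise_level * len(spam_idx))
    y_noisy[rng.choice(spam_idx, size=n_flip, replace=False)] = 0
    return y_noisy

for noise in (0.0, 0.1, 0.2):
    y_noisy = flip_spam_labels(y_tr, noise, rng)
    clf = GaussianNB().fit(X_tr, y_noisy)
    fnr = false_negative_rate(y_te, clf.predict(X_te))
    print(f"noise={noise:.0%}  FNR={fnr:.3f}")
```

Because only spam labels are flipped, the false positive rate on legitimate mail stays largely intact while the false negative rate grows with the noise level, mirroring the attack goal described above.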



Acknowledgements

This research was partly supported by the Integration of Cloud Computing and Big Data, Innovation of Science and Education (grant no. 2017A11017), the Key Research, Development, and Dissemination Program of Henan Province (Science and Technology for the People) (grant no. 182207310002), and the Key Science and Technology Project of Xinjiang Production and Construction Corps (grant no. 2018AB017).

Author information


Corresponding author

Correspondence to Hongpo Zhang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, H., Cheng, N., Zhang, Y. et al. Label flipping attacks against Naive Bayes on spam filtering systems. Appl Intell 51, 4503–4514 (2021). https://doi.org/10.1007/s10489-020-02086-4

