
Trustworthy Machine Learning Approaches for Cyberattack Detection: A Review

  • Conference paper
Computational Science and Its Applications – ICCSA 2022 Workshops (ICCSA 2022)

Abstract

In recent years, machine learning techniques have been applied in sensitive areas such as health care, medical diagnosis, facial recognition, and cybersecurity. This exponential growth brings potentially large-scale ethical, safety, and social ramifications, and with such ubiquity and sensitivity, concerns about ethics, trust, transparency, and accountability inevitably arise. Given the threat of sophisticated cyberattacks, more research is needed to establish notions of trustworthiness for cybersecurity and to develop methodologies for a wide range of explainable machine learning models that assure reliable threat identification and detection. This survey examines a variety of explainable machine learning techniques that can be used to build a reliable cybersecurity infrastructure. The main aim of this study is to carry out an in-depth review of existing explainable machine learning algorithms for cyberattack detection. The study employed a seven-step survey model to define the research domain, run search queries, and compile the articles retrieved from digital databases. An extensive search of electronic databases, including arXiv, Semantic Scholar, IEEE Xplore, Wiley Online Library, Scopus, Google Scholar, ACM, and Springer, was carried out to find relevant literature on trustworthy machine learning algorithms for detecting cyberattacks, covering white papers, conference papers, and journal articles published between 2016 and 2022. Of the 800 articles retrieved, only 25 papers on trustworthy cybersecurity and explainable AI for cybersecurity were selected for this review. The study reveals that the decision tree technique outperforms other state-of-the-art machine learning models in terms of transparency and interpretability. Finally, this research suggests that incorporating explainability into machine learning cybersecurity models will help uncover the root causes of defensive failures, making it easier for cybersecurity experts to improve both cybersecurity infrastructure and development, rather than just model results, policy, and management.
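
The finding above, that decision trees rank high on transparency and interpretability, can be illustrated with a minimal sketch (not taken from the paper): a shallow tree fitted to synthetic "benign vs. attack" traffic can be dumped as plain if/else rules that a security analyst can audit. The feature names and data below are illustrative assumptions, not the intrusion datasets used in the surveyed studies.

    # Minimal sketch, assuming scikit-learn; feature names and synthetic data
    # are hypothetical stand-ins for flow-level intrusion-detection features.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text

    feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]

    # Synthetic "benign vs. attack" traffic; a real study would use a dataset
    # such as NSL-KDD or CICIDS2017 instead.
    X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                               n_redundant=1, random_state=42)

    # Keep the tree shallow so the extracted rules stay short enough to audit.
    clf = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X, y)

    # Dump the entire decision logic as human-readable if/else rules --
    # the transparency property the review attributes to decision trees.
    print(export_text(clf, feature_names=feature_names))
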



Author information

Corresponding author

Correspondence to Blessing Guembe.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Guembe, B., Azeta, A., Misra, S., Ahuja, R. (2022). Trustworthy Machine Learning Approaches for Cyberattack Detection: A Review. In: Gervasi, O., Murgante, B., Misra, S., Rocha, A.M.A.C., Garau, C. (eds) Computational Science and Its Applications – ICCSA 2022 Workshops. ICCSA 2022. Lecture Notes in Computer Science, vol 13381. Springer, Cham. https://doi.org/10.1007/978-3-031-10548-7_20


  • DOI: https://doi.org/10.1007/978-3-031-10548-7_20

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10547-0

  • Online ISBN: 978-3-031-10548-7

  • eBook Packages: Computer Science, Computer Science (R0)
