
Explanation-Driven Characterization of Android Ransomware

  • Conference paper
  • In: Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

Machine learning is currently used with success to address several cybersecurity detection and classification tasks. Typically, such detectors are built on complex learning algorithms employing a wide variety of features. Although these settings achieve considerable performance, gaining insight into the learned knowledge turns out to be a hard task. To address this issue, research on the interpretability of machine learning approaches to cybersecurity tasks is currently rising. In particular, relying on explanations could improve prevention and detection capabilities, since they can help human experts identify the distinctive features that truly characterize malware attacks. From this perspective, Android ransomware represents a serious threat. Leveraging state-of-the-art explanation techniques, we present a first approach that enables the identification of the most influential discriminative features for ransomware characterization. We propose strategies for adopting explanation techniques appropriately, and we describe ransomware families and their evolution over time. The reported results suggest that our proposal can help cyber threat intelligence teams in the early detection of new ransomware families, and that it could be applicable to other malware detection systems through the identification of their distinctive features.
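To make the approach concrete, the sketch below illustrates the kind of analysis it enables: gradient-based feature attribution applied to a malware detector. This is not the authors' implementation. It trains a simple logistic-regression classifier on synthetic feature vectors, which stand in for the API-package usage features commonly used for Android ransomware detection, and then applies Integrated Gradients to rank the features driving a single prediction; all data, feature semantics, and model choices here are illustrative assumptions.

```python
# Minimal, self-contained sketch (hypothetical data and model, not the
# paper's implementation) of gradient-based attribution for a detector.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8  # e.g., call counts for 8 hypothetical API packages

# Synthetic stand-in data: "ransomware" samples (label 1) use the first
# two packages heavily; "benign" samples (label 0) do not.
X_benign = rng.poisson(1.0, size=(200, n_features)).astype(float)
X_ransom = rng.poisson(1.0, size=(200, n_features)).astype(float)
X_ransom[:, :2] += rng.poisson(4.0, size=(200, 2))
X = np.vstack([X_benign, X_ransom])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic-regression detector trained by plain gradient descent.
w, b = np.zeros(n_features), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def integrated_gradients(x, steps=50):
    """Integrated Gradients for f(x) = sigmoid(w @ x + b): average the
    input gradient along a straight path from an all-zeros baseline to x,
    then scale by (x - baseline)."""
    baseline = np.zeros_like(x)
    grads = []
    for alpha in np.linspace(0.0, 1.0, steps):
        z = (baseline + alpha * (x - baseline)) @ w + b
        s = 1.0 / (1.0 + np.exp(-z))
        grads.append(s * (1.0 - s) * w)  # d sigmoid(z) / d x
    return (x - baseline) * np.mean(grads, axis=0)

# Rank features by attribution for one ransomware sample: the two
# "ransomware-typical" packages should dominate the explanation.
attributions = integrated_gradients(X_ransom[0])
for i in np.argsort(-np.abs(attributions)):
    print(f"feature {i}: {attributions[i]:+.4f}")
```

Aggregating such per-sample attributions across the samples of a family is, in spirit, what allows the distinctive features of each ransomware family, and their evolution over time, to be described.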


Notes

  1. MD5: 8a7fea6a5279e8f64a56aa192d2e7cf0.


Author information

Corresponding author

Correspondence to Michele Scalas.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Scalas, M., Rieck, K., Giacinto, G. (2021). Explanation-Driven Characterization of Android Ransomware. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_17

  • DOI: https://doi.org/10.1007/978-3-030-68796-0_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68795-3

  • Online ISBN: 978-3-030-68796-0

  • eBook Packages: Computer Science (R0)
