Certified unlearning for a trustworthy machine learning-based access control administration

Regular Contribution
International Journal of Information Security

Abstract

With the rapid evolution and increasing complexity of contemporary distributed software systems, there is a pressing demand for access control methods that are effective, scalable, and secure. In response, Machine Learning (ML) has been proposed to complement manually crafted authorisation policies, to better handle the dynamic and constantly evolving nature of such systems and to detect unusual access requests. As systems evolve, so do the conditions under which access is granted, and validating access control policy updates is imperative to prevent unauthorised access. While modifying traditional rule-based access control policies is relatively straightforward, the administration of Machine Learning-based Access Control (MLBAC) presents a substantial security challenge. This paper examines the trustworthy administration of MLBAC systems through certified machine unlearning, used to revert previous policies and correct misbehaviour. More specifically, we address the security concerns of employing ML as a complementary access control mechanism by exploring exact and approximate unlearning and evaluating their impact using real-world data. We demonstrate the effectiveness and security of unlearning both in reverting policies and in addressing vulnerabilities that may emerge during the model's life cycle. These promising results address one of the primary challenges associated with MLBAC systems and contribute to their wider acceptance in the future.
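
To illustrate the distinction the abstract draws between exact and approximate unlearning, the following minimal Python sketch trains a toy L2-regularised logistic-regression access-control model and then removes one training sample either by a single Newton step towards the leave-one-out optimum (approximate) or by retraining on the remaining data (exact). All function names, hyperparameters, and the synthetic access-request data are illustrative assumptions, not the authors' implementation; certified schemes additionally perturb the training objective with calibrated noise so that the unlearned model is statistically indistinguishable from one retrained from scratch.

```python
# Minimal sketch: approximate (one-step Newton) vs. exact (retrain) unlearning
# for an L2-regularised logistic-regression access-control model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam=1e-2, lr=0.1, epochs=500):
    """Full-batch gradient descent on the regularised mean log-loss."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / n + lam * w
        w -= lr * grad
    return w

def unlearn_one(w, X, y, idx, lam=1e-2):
    """One Newton step towards the optimum of the loss without sample idx."""
    keep = np.ones(len(y), dtype=bool)
    keep[idx] = False
    Xr, yr = X[keep], y[keep]
    n = len(yr)
    # Gradient and Hessian of the *remaining* objective at the current w.
    p = sigmoid(Xr @ w)
    grad = Xr.T @ (p - yr) / n + lam * w
    H = (Xr * (p * (1 - p))[:, None]).T @ Xr / n + lam * np.eye(X.shape[1])
    return w - np.linalg.solve(H, grad)

# Toy access-request features (user/resource attributes) and grant labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w_full = train(X, y)
w_unlearned = unlearn_one(w_full, X, y, idx=7)             # approximate removal
w_retrained = train(np.delete(X, 7, 0), np.delete(y, 7))   # exact baseline
print(np.linalg.norm(w_unlearned - w_retrained))
```

In this toy setting, the one-step Newton update closely approximates retraining on the remaining data at a fraction of the cost, which is what makes approximate unlearning attractive when policy reversals must be applied frequently.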



Funding

This research is partially funded by the Research Fund KU Leuven, and by the Flemish Research Programme Cybersecurity. This paper was also partially supported by the AIDE project funded by the Belgian SPF BOSA under the programme “Financing of projects for the development of artificial intelligence in Belgium” with reference number 06.40.32.33.00.10.

Author information

Corresponding author

Correspondence to Javier Martínez Llamas.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Martínez Llamas, J., Preuveneers, D. & Joosen, W. Certified unlearning for a trustworthy machine learning-based access control administration. Int. J. Inf. Secur. 24, 94 (2025). https://doi.org/10.1007/s10207-025-01003-5
