
Privacy preserving machine unlearning for smart cities

Published in: Annals of Telecommunications

Abstract

Due to emerging concerns about public and private privacy in smart cities, many countries and organizations are establishing laws and regulations (e.g., the GDPR) to protect data security. One of the most important provisions is the so-called Right to be Forgotten, which requires that personal data be protected from any further inappropriate use. To truly forget such data, they must be deleted not only from every database that stores them, but also removed from every machine learning model trained on them. The latter task is called machine unlearning. A naive approach to machine unlearning is to retrain the model from scratch after the data are removed; in the current big data era, however, this takes a very long time. In this paper, we borrow the idea of the Generative Adversarial Network (GAN) and propose a fast machine unlearning method that unlearns data in an adversarial way. Experimental results show that our method yields significant improvements in forgetting performance, model accuracy, and time cost.
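The adversarial idea described in the abstract — degrade the model's performance on the data to be forgotten while preserving it on the rest — can be illustrated with a toy sketch. The code below is not the authors' GAN-based algorithm, only a minimal stand-in for the general principle: it removes one cluster of points from a logistic-regression model by gradient ascent on the forget set and descent on the retain set, stopping once the model's confidence on the forgotten points is no better than chance. All data, names, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 around (-2, 0); class 1 split into a main cluster
# around (2, 0) and a smaller cluster around (2, 4) that we will unlearn.
X = np.vstack([
    rng.normal((-2, 0), 1.0, size=(100, 2)),   # class 0 (retain)
    rng.normal((2, 0), 1.0, size=(100, 2)),    # class 1 (retain)
    rng.normal((2, 4), 0.5, size=(40, 2)),     # class 1 (forget set)
])
y = np.array([0] * 100 + [1] * 140)
Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
retain = np.arange(0, 200)
forget = np.arange(200, 240)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(w, idx, epochs=500, lr=0.1):
    """Plain full-batch logistic-regression training on the rows in idx."""
    for _ in range(epochs):
        p = sigmoid(Xb[idx] @ w)
        w -= lr * Xb[idx].T @ (p - y[idx]) / len(idx)
    return w

def unlearn(w, steps=300, lr=0.1):
    """Gradient ascent on the forget-set loss, descent on the retain-set
    loss, until confidence on the forgotten points drops to chance."""
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        if p[forget].mean() < 0.5:             # no better than a coin flip
            break
        g_retain = Xb[retain].T @ (p[retain] - y[retain]) / len(retain)
        g_forget = Xb[forget].T @ (p[forget] - y[forget]) / len(forget)
        w -= lr * (g_retain - g_forget)        # minus g_forget = ascent
    return w

w = fit(np.zeros(3), np.arange(len(X)))        # ordinary training
conf_before = sigmoid(Xb[forget] @ w).mean()
w = unlearn(w)
conf_after = sigmoid(Xb[forget] @ w).mean()
retain_acc = ((sigmoid(Xb[retain] @ w) > 0.5) == y[retain]).mean()
```

The stopping rule is the point of the sketch: unlearning stops as soon as the model is no more confident on the forgotten cluster than random guessing, so the retained data's accuracy is largely preserved — far cheaper than retraining from scratch, which is the naive baseline the paper argues against.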


[Figures 1–10 appear in the full article.]



Funding

This work is supported by the National Natural Science Foundation of China (No. 61802383), the Research Project of Pazhou Lab for Excellent Young Scholars (No. PZL2021KF0024, No. PZL2021KF0002), the Guangzhou Basic and Applied Basic Research Foundation (No. 202201010330, No. 202201020162, No. 202201020221), the Guangdong Philosophy and Social Science Planning Project (No. GD19YYJ02), the Guangdong Regional Joint Fund Project (No. 2022A1515110157), Research on the Supporting Technologies of the Metaverse in Cultural Media (No. PT252022039), the Jiangsu Key Laboratory of Media Design and Software Technology (No. 21ST0202), the Shaanxi Key Laboratory of Blockchain and Secure Computing, the Guangzhou University Research Project (grant no. PT252022039), and Innovation Research for the Postgraduates of Guangzhou University (No. 2021GDJC-M33).

Author information

Corresponding author

Correspondence to Kongyang Chen.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, K., Huang, Y., Wang, Y. et al. Privacy preserving machine unlearning for smart cities. Ann. Telecommun. 79, 61–72 (2024). https://doi.org/10.1007/s12243-023-00960-z


