Abstract
Rich, fine-grained personal and enterprise data are now being collected and recorded to provide big-data support for personal and enterprise credit evaluation and credit integration. In this process, data privacy has become an increasingly prominent concern: a user's location data may be used to infer their address, behavior, and activities, and a user's service-usage records may reveal attributes such as gender, age, and health conditions. Protecting data privacy while still meeting the needs of credit evaluation and credit integration is one of the major challenges of the big-data era. In this paper, we propose a security and privacy protection framework for artificial intelligence algorithms, analyze the possibility of privacy leakage from three perspectives (data privacy security, model privacy security, and environment privacy security), and present the corresponding defense strategies. In addition, we give an example data-security-level algorithm that helps ensure the privacy of the data.
Supported by the National Key R&D Program (Grant No. 2019YFB1404602).
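As a minimal sketch of what a data-security-level scheme might look like, the snippet below assigns each attribute a sensitivity level and masks attributes at or above a protection threshold before a record is shared. All field names, level assignments, and the masking rule here are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
# Illustrative data-security-level scheme (assumed, not the paper's algorithm):
# each attribute gets a sensitivity level; attributes whose level meets the
# protection threshold are masked before the record leaves the data holder.

SECURITY_LEVELS = {
    "name": 3,        # direct identifier: highest sensitivity
    "location": 3,    # can reveal address and activity patterns
    "disease": 3,     # sensitive health information
    "age": 2,         # quasi-identifier
    "gender": 2,      # quasi-identifier
    "service_id": 1,  # low-sensitivity operational data
}

def protect_record(record, threshold=3):
    """Mask every field whose security level is at or above the threshold."""
    out = {}
    for field, value in record.items():
        # Unknown fields default to the highest level (fail closed).
        level = SECURITY_LEVELS.get(field, 3)
        out[field] = "***" if level >= threshold else value
    return out

record = {"name": "Alice", "location": "Beijing", "age": 30, "service_id": "S42"}
print(protect_record(record))
# {'name': '***', 'location': '***', 'age': 30, 'service_id': 'S42'}
```

Lowering the threshold would also mask the quasi-identifiers (age, gender), trading utility for stronger protection, which mirrors the trade-off between privacy and the needs of credit evaluation discussed above.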
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Lv, C., Zhang, X., Sun, Z. (2021). Privacy Protection Framework for Credit Data in AI. In: Liu, Z., Wu, F., Das, S.K. (eds) Wireless Algorithms, Systems, and Applications. WASA 2021. Lecture Notes in Computer Science, vol. 12938. Springer, Cham. https://doi.org/10.1007/978-3-030-86130-8_23
Print ISBN: 978-3-030-86129-2
Online ISBN: 978-3-030-86130-8