Abstract
Convolutional Neural Networks (CNNs) have been widely used in various areas. As training CNNs requires powerful computing resources, data owners now employ clouds to accomplish the task. However, this inevitably introduces serious privacy risks for the data owners, as the training images are outsourced to the clouds, which may illegally inspect the image content for potential benefit. In this work, we propose HeHe, a CNN training framework over encrypted images with practical efficiency, built on additively homomorphic encryption and a carefully designed interaction scheme in the CryptoHeader, the shallow layers of the network. To evaluate whether image content is preserved through a processing system, we propose \((\alpha ,\beta )\)-recoverable, a novel image privacy model, and theoretically prove that HeHe is robust under it. We evaluate HeHe on several datasets in terms of accuracy, efficiency, and privacy. The empirical study shows that HeHe is practical for CNN training over encrypted images, preserving accuracy with acceptable training cost and content leakage.
Notes
- 1.
Although \(\lambda \) cannot be estimated accurately in CNNs, empirical studies suggest it is bounded by 0.25 [10].
- 2.
python-paillier, https://github.com/n1analytics/python-paillier.
- 3.
We do not report results on large-scale datasets due to our currently limited computing resources.
- 4.
As [39] does not provide enough details for reproduction, we can only list the cost reported in their paper for reference. Moreover, they report only the cost of inference, which is equivalent to forward propagation.
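Note 2 above points to the python-paillier library. As a self-contained illustration of the additive homomorphism that HeHe relies on, the following toy Paillier sketch shows how multiplying two ciphertexts adds the underlying plaintexts. This is a textbook sketch with insecure parameters, not the paper's implementation; the function names and tiny primes are illustrative only.

```python
import random
from math import gcd

# Toy Paillier cryptosystem. NOT secure: tiny primes, no hardening.
# In practice, use a vetted library such as python-paillier.

def keygen(p=293, q=433):
    """Generate a toy Paillier key pair from two small primes."""
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                                       # standard generator choice
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # mu = L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:                           # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    """Recover the plaintext: m = L(c^lam mod n^2) * mu mod n."""
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add_encrypted(pk, c1, c2):
    """Additive homomorphism: ciphertext product decrypts to plaintext sum."""
    n, _ = pk
    return (c1 * c2) % (n * n)
```

Because addition is the only operation available under the ciphertexts, any nonlinear step in the network must be handled by the interaction scheme rather than computed homomorphically.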
References
Acar, A., Aksu, H., Uluagac, A.S., Conti, M.: A survey on homomorphic encryption schemes: theory and implementation. ACM Comput. Surv. 51(4), 79:1–79:35 (2018)
Bost, R., Popa, R.A., Tu, S., Goldwasser, S.: Machine learning classification over encrypted data. In: Proceedings of 22nd Annual Network and Distributed System Security Symposium (2015)
Boureau, Y., Bach, F.R., LeCun, Y., Ponce, J.: Learning mid-level features for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2559–2566 (2010)
Bourse, F., Minelli, M., Minihold, M., Paillier, P.: Fast homomorphic evaluation of deep discretized neural networks. In: Proceedings of 38th Annual International Cryptology Conference on Advances in Cryptology, pp. 483–512 (2018)
Central Bureau of the Commission Internationale de l'Éclairage (Vienna, Austria): CIE (1978) recommendations on uniform color spaces, color-difference equations, and metric color terms. Supplement 2 to CIE publication 15 (E1.3.1) 1971/(TC1.3) (1978)
Chabanne, H., de Wargny, A., Milgram, J., Morel, C., Prouff, E.: Privacy-preserving classification on deep neural network. IACR Cryptology ePrint Archive 2017, 35 (2017)
Chou, E., Beal, J., Levy, D., Yeung, S., Haque, A., Fei-Fei, L.: Faster cryptonets: leveraging sparsity for real-world encrypted inference. arXiv:1811.09953 (2018)
Dosovitskiy, A., Brox, T.: Inverting visual representations with convolutional networks. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4829–4837 (2016)
Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K.E., Naehrig, M., Wernsing, J.: CryptoNets: applying neural networks to encrypted data with high throughput and accuracy. In: Proceedings of the 33rd International Conference on Machine Learning, pp. 201–210 (2016)
Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pp. 315–323 (2011)
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge, MA (2016). https://www.deeplearningbook.org
Han, K., Hong, S., Cheon, J.H., Park, D.: Logistic regression on homomorphic encrypted data at scale. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence, pp. 9466–9471 (2019)
Hartmann, V., Modi, K., Pujol, J.M., West, R.: Privacy-preserving classification with secret vector machines. In: CIKM, pp. 475–484 (2020)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Hesamifard, E., Takabi, H., Ghasemi, M.: Deep neural networks classification over encrypted data. In: Proceedings of the 9th ACM Conference on Data and Application Security and Privacy, pp. 97–108 (2019)
Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2261–2269 (2017)
Juvekar, C., Vaikuntanathan, V., Chandrakasan, A.: GAZELLE: a low latency framework for secure neural network inference. In: Proceedings of the 27th USENIX Security Symposium, pp. 1651–1669 (2018)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations (2015)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Liu, J., Juuti, M., Lu, Y., Asokan, N.: Oblivious neural network predictions via MiniONN transformations. In: Proceedings of the 24th ACM SIGSAC Conference on Computer and Communications Security, pp. 619–631 (2017)
Luo, M.R., Cui, G., Li, C.: Uniform colour spaces based on CIECAM02 colour appearance model. Color Res. Appl. 31(4), 320–330 (2006)
Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5188–5196 (2015)
Mishra, P., Lehmkuhl, R., Srinivasan, A., Zheng, W., Popa, R.A.: DELPHI: a cryptographic inference service for neural networks. In: Proceedings of 29th USENIX Security Symposium, pp. 2505–2522 (2020)
Mohassel, P., Zhang, Y.: SecureML: a system for scalable privacy-preserving machine learning. In: Proceedings of the 38th IEEE Symposium on Security and Privacy, pp. 19–38 (2017)
Naehrig, M., Lauter, K.E., Vaikuntanathan, V.: Can homomorphic encryption be practical? In: Proceedings of the 3rd ACM Cloud Computing Security Workshop, pp. 113–124 (2011)
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning (2011)
Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Proceedings of the 17th Annual International Conference on the Theory and Application of Cryptographic Techniques, pp. 223–238 (1999)
Popa, R.A., Redfield, C.M.S., Zeldovich, N., Balakrishnan, H.: CryptDB: protecting confidentiality with encrypted query processing. In: Proceedings of the 23rd ACM Symposium on Operating Systems Principles, pp. 85–100 (2011)
Rathee, D., et al.: CrypTFlow2: practical 2-party secure inference. In: Proceedings of the 27th ACM SIGSAC Conference on Computer and Communications Security, pp. 325–342 (2020)
Ryffel, T., Pointcheval, D., Bach, F., Dufour-Sans, E., Gay, R.: Partially encrypted deep learning using functional encryption. In: Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, pp. 4519–4530 (2019)
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128(2), 336–359 (2020)
Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.A.: Striving for simplicity: the all convolutional net. In: Proceedings of the Workshop 3rd International Conference on Learning Representations (2015)
Tsikhanovich, M., Magdon-Ismail, M., Ishaq, M., Zikas, V.: PD-ML-Lite: private distributed machine learning from lightweight cryptography. In: Proceedings of the 22nd Information Security Conference, vol. 11723, pp. 149–167. Springer (2019). https://doi.org/10.1007/978-3-030-30215-3_8
Wagh, S., Gupta, D., Chandran, N.: SecureNN: efficient and private neural network training. IACR Cryptology ePrint Archive 2018, 442 (2018)
Wagh, S., Tople, S., Benhamouda, F., Kushilevitz, E., Mittal, P., Rabin, T.: FALCON: honest-majority maliciously secure framework for private deep learning. Proc. Priv. Enhanc. Technol. 2021(1), 188–208 (2021)
Yosinski, J., Clune, J., Nguyen, A.M., Fuchs, T.J., Lipson, H.: Understanding neural networks through deep visualization. arXiv:1506.06579 (2015)
Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Proceedings of the European Conference on Computer Vision, pp. 818–833 (2014)
Zhang, Q., Wang, C., Wu, H., Xin, C., Phuong, T.V.: GELU-NET: a globally encrypted, locally unencrypted deep neural network for privacy-preserved learning. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 3933–3939 (2018)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Nos. 61972309 and 62272369) and the Key Technology Innovation Project of Hangzhou (2022AIZD0132).
A Adaptability of \(\left( \alpha , \beta \right) \)-Recoverable
The \(\left( \alpha , \beta \right) \)-recoverable model attempts to bridge pixel-level recoverability and image privacy. To build intuition for the recovery rate under different levels of leakage, we generate different noisy images and compute the recovery rate, i.e., \(\alpha \), under different \(\beta \) settings (we adopt CAM02-UCS [22] as the color space); some results are shown in Fig. 6. After comparing different values of \(\beta \), we empirically choose \(\beta =1/3\) as a suitable threshold, because it makes the change of \(\alpha \) perceptually uniform.
Note that both \(\alpha \) and \(\beta \) are preset by users based on their privacy requirements. Based on many empirical estimates, we recommend the values \(\alpha \!=\!0.5,\beta \!=\!1/3\), and we adopt them in the HeHe experiments.
One drawback of \(\left( \alpha , \beta \right) \)-recoverable is that it cannot measure similarity under some a posteriori transformations, e.g., rotation and translation. However, such transformations matter only when the adversary obtains the outsourced images used by users, which cannot happen in HeHe, because the images are encrypted by the users. Thus, under the application scenario of HeHe, \(\left( \alpha , \beta \right) \)-recoverable can soundly measure the recovery rate of an image.
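The recovery-rate computation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, and plain Euclidean distance in normalized RGB stands in for the CAM02-UCS color difference the paper adopts.

```python
import numpy as np

def recovery_rate(original, recovered, beta=1/3):
    """Fraction of pixels whose color difference falls below beta.

    original, recovered: float arrays of shape (H, W, 3), values in [0, 1].
    Euclidean RGB distance is used here as a stand-in for the CAM02-UCS
    difference; distances are normalized by sqrt(3), the maximum possible
    RGB distance, so they lie in [0, 1] like beta.
    """
    diff = np.linalg.norm(original - recovered, axis=-1) / np.sqrt(3.0)
    # alpha is the fraction of "recovered" pixels under threshold beta.
    return float(np.mean(diff < beta))
```

An image would then be deemed \(\left( \alpha , \beta \right) \)-recoverable when `recovery_rate(original, recovered, beta)` reaches \(\alpha \); with the recommended settings, a recovered image with rate at least 0.5 under \(\beta =1/3\) leaks its content.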
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sun, L., Li, H., Yu, S., Ma, X., Peng, Y., Cui, J. (2022). HeHe: Balancing the Privacy and Efficiency in Training CNNs over the Semi-honest Cloud. In: Susilo, W., Chen, X., Guo, F., Zhang, Y., Intan, R. (eds) Information Security. ISC 2022. Lecture Notes in Computer Science, vol 13640. Springer, Cham. https://doi.org/10.1007/978-3-031-22390-7_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-22389-1
Online ISBN: 978-3-031-22390-7