Citadel: Protecting Data Privacy and Model Confidentiality for Collaborative Learning

Published: 01 November 2021

Abstract

Many organizations own data but have limited machine learning (ML) expertise (data owners), while organizations with that expertise need data from diverse sources to train truly generalizable models (model owners). As ML advances and awareness of its value grows, data owners would like to pool their data and collaborate with model owners so that both parties can benefit from the resulting models. In such a collaboration, the data owners want to protect the privacy of their training data, while the model owners want to keep the model and the training method, which may contain intellectual property, confidential. Existing private ML solutions, such as federated learning and split learning, cannot simultaneously meet the privacy requirements of both data and model owners.
We present Citadel, a scalable collaborative ML system that protects both data and model privacy in untrusted infrastructures equipped with Intel SGX. Citadel performs distributed training across multiple training enclaves running on behalf of data owners and an aggregator enclave on behalf of the model owner. Citadel establishes a strong information barrier between these enclaves by zero-sum masking and hierarchical aggregation to prevent data/model leakage during collaborative training. Compared with existing SGX-protected systems, Citadel achieves better scalability and stronger privacy guarantees for collaborative ML. Cloud deployment with various ML models shows that Citadel scales to a large number of enclaves with less than 1.73X slowdown.
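To make the masking idea concrete, here is a minimal NumPy sketch of zero-sum masking for secure aggregation: each worker hides its update behind a random mask, and because the masks sum to zero, the aggregator recovers the exact sum without seeing any individual update. The dealer-style mask generation and all function names below are illustrative assumptions for exposition, not Citadel's actual protocol, which generates and applies masks inside SGX enclaves and combines them through hierarchical aggregation.

```python
# Minimal, self-contained sketch of zero-sum masking for secure aggregation.
# Illustrative only: the dealer-style mask generation and function names are
# assumptions for exposition, not Citadel's in-enclave protocol.
import numpy as np


def make_zero_sum_masks(n_workers, shape, rng):
    """Generate n_workers random masks whose element-wise sum is zero."""
    masks = [rng.standard_normal(shape) for _ in range(n_workers - 1)]
    masks.append(-np.sum(masks, axis=0))  # final mask cancels all the others
    return masks


def mask_update(update, mask):
    """A training worker adds its mask before releasing the model update."""
    return update + mask


def aggregate(masked_updates):
    """The aggregator sums masked updates; the masks cancel out."""
    return np.sum(masked_updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    n_workers, shape = 4, (3,)
    true_updates = [rng.standard_normal(shape) for _ in range(n_workers)]
    masks = make_zero_sum_masks(n_workers, shape, rng)
    masked = [mask_update(u, m) for u, m in zip(true_updates, masks)]
    # The aggregate equals the plain sum of the updates, yet no single masked
    # update reveals the gradient it was derived from.
    assert np.allclose(aggregate(masked), np.sum(true_updates, axis=0))
```

The same cancellation property is what lets the aggregator enclave in Citadel combine updates from the training enclaves without observing any data owner's individual contribution.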

Supplementary Material

  • VTT File (Day3_Session11-Order3.vtt)
  • MP4 File (Day3_Session11-Order3.mp4): Presentation video


Published In

SoCC '21: Proceedings of the ACM Symposium on Cloud Computing
November 2021
685 pages
ISBN:9781450386388
DOI:10.1145/3472883
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 November 2021

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SoCC '21: ACM Symposium on Cloud Computing
November 1 - 4, 2021
Seattle, WA, USA

Acceptance Rates

Overall Acceptance Rate 169 of 722 submissions, 23%


Cited By

  • (2025) Confidential Computing Across Edge-To-Cloud for Machine Learning: A Survey Study. Software: Practice and Experience. DOI: 10.1002/spe.3398. Online publication date: 3-Jan-2025.
  • (2024) Secure-by-Design Real-Time Internet of Medical Things Architecture: e-Health Population Monitoring (RTPM). Telecom 5:3 (609-631). DOI: 10.3390/telecom5030031. Online publication date: 10-Jul-2024.
  • (2024) Collaborative Distributed Machine Learning. ACM Computing Surveys 57:4 (1-36). DOI: 10.1145/3704807. Online publication date: 20-Nov-2024.
  • (2024) Machine Learning with Confidential Computing: A Systematization of Knowledge. ACM Computing Surveys 56:11 (1-40). DOI: 10.1145/3670007. Online publication date: 29-Jun-2024.
  • (2024) Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training. 2024 International Joint Conference on Neural Networks (IJCNN), 1-10. DOI: 10.1109/IJCNN60899.2024.10650731. Online publication date: 30-Jun-2024.
  • (2024) Duet: Combining a Trustworthy Controller with a Confidential Computing Environment. 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 436-442. DOI: 10.1109/EuroSPW61312.2024.00028. Online publication date: 8-Jul-2024.
  • (2024) PraaS: Verifiable Proofs of Property as-a-Service with Intel SGX. 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 199-207. DOI: 10.1109/EuroSPW61312.2024.00027. Online publication date: 8-Jul-2024.
  • (2024) Federated learning for digital healthcare: concepts, applications, frameworks, and challenges. Computing 106:9 (3113-3150). DOI: 10.1007/s00607-024-01317-7. Online publication date: 10-Jul-2024.
  • (2023) Enabling Secure and Efficient Data Analytics Pipeline Evolution with Trusted Execution Environment. Proceedings of the VLDB Endowment 16:10 (2485-2498). DOI: 10.14778/3603581.3603589. Online publication date: 1-Jun-2023.
  • (2023) Secure MLaaS with Temper: Trusted and Efficient Model Partitioning and Enclave Reuse. Proceedings of the 39th Annual Computer Security Applications Conference, 621-635. DOI: 10.1145/3627106.3627145. Online publication date: 4-Dec-2023.
