DOI: 10.1145/3319535.3363279

Poster: Towards Characterizing and Limiting Information Exposure in DNN Layers

Published: 06 November 2019

Abstract

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, creating a risk that (sensitive) information from the training data captured inside these models is disclosed. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, considered individually, the last layers encode more information from the training data than the first layers. We also find that the same DNN architecture trained on different datasets exhibits similar per-layer exposure. Finally, we evaluate an architecture that protects the most sensitive layers within an on-device Trusted Execution Environment (TEE) against potential white-box membership inference attacks, without significant computational overhead.
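The abstract's per-layer measurement can be pictured with a short experiment. The sketch below is a rough proxy, not the paper's generalization-error-based measure: it scores each layer by the gap between its loss-gradient norm on a training ("member") batch and a held-out ("non-member") batch, the kind of per-layer signal that white-box membership inference attacks exploit. Everything in it (PyTorch, the toy model, the synthetic batches, the names layer_grad_norms and per_layer_exposure) is an illustrative assumption.

```python
# Minimal sketch (assumed proxy, not the authors' exact method):
# score each layer by the member/non-member gradient-norm gap.
import torch
import torch.nn as nn

def layer_grad_norms(model: nn.Module, criterion, x, y):
    """L2 norm of the loss gradient w.r.t. each parameter tensor."""
    model.zero_grad()
    criterion(model(x), y).backward()
    return {name: p.grad.norm().item()
            for name, p in model.named_parameters() if p.grad is not None}

def per_layer_exposure(model, criterion, member, nonmember):
    """Absolute gap between member and non-member gradient norms per
    layer; a larger gap suggests the layer holds more information
    specific to the training set."""
    m = layer_grad_norms(model, criterion, *member)
    n = layer_grad_norms(model, criterion, *nonmember)
    return {k: abs(m[k] - n[k]) for k in m}

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy model and synthetic batches, purely for illustration.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    criterion = nn.CrossEntropyLoss()
    member = (torch.randn(16, 32), torch.randint(0, 10, (16,)))
    nonmember = (torch.randn(16, 32), torch.randint(0, 10, (16,)))
    for layer, gap in per_layer_exposure(model, criterion,
                                         member, nonmember).items():
        print(f"{layer}: gap={gap:.4f}")
```

Under such a proxy, the layers with the largest gaps, typically the last ones according to the abstract, are the natural candidates to partition into the on-device TEE, leaving the earlier, less exposed layers outside the enclave to keep its memory and compute footprint small.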


Cited By

  • Security and Privacy Challenges of Deep Learning. In Research Anthology on Privatizing and Securing Data (2021), 1258-1280. DOI: 10.4018/978-1-7998-8954-0.ch059
  • Security and Privacy Challenges of Deep Learning. In Deep Learning Strategies for Security Enhancement in Wireless Sensor Networks (2020), 42-64. DOI: 10.4018/978-1-7998-5068-7.ch003
  • DarkneTZ. In Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys 2020), 161-174. DOI: 10.1145/3386901.3388946




Published In

CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
November 2019, 2755 pages
ISBN: 9781450367479
DOI: 10.1145/3319535
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. deep learning
  2. privacy
  3. sensitive information exposure
  4. training data
  5. trusted execution environment

Qualifiers

  • Poster

Funding Sources

  • EPSRC: DADA
  • EPSRC: Databox
  • EPSRC: HDI

Conference

CCS '19

Acceptance Rates

CCS '19 paper acceptance rate: 149 of 934 submissions, 16%
Overall acceptance rate: 1,261 of 6,999 submissions, 18%

