DOI: 10.1145/3623652.3623665
Research article — HASP Conference Proceedings

DINAR: Enabling Distribution Agnostic Noise Injection in Machine Learning Hardware

Authors Info & Claims
Published: 29 October 2023

ABSTRACT

Machine learning (ML) has seen a major rise in popularity on edge devices in recent years, ranging from IoT devices to self-driving cars. Security is a critical consideration on these platforms. State-of-the-art security-centric ML algorithms (e.g., differentially private ML, adversarial robustness) require noise sampled from Laplace or Gaussian distributions. Edge accelerators [15, 25, 36, 50] lack CPUs that could add such noise. Existing hardware approaches that generate noise on the fly incur high overheads and leak side-channel information that can undermine security [34, 47]. To remedy this, we propose DINAR, lightweight hardware that enables noise addition from arbitrary distributions. For differentially private ML, DINAR enables noise addition while incurring 23× lower area and 40× lower energy compared to producing noise directly on-chip.
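For context, the Laplace and Gaussian noise the abstract refers to is what the standard mechanisms of differential privacy add to a computed value, such as a clipped gradient in DP-SGD [1, 62]. The sketch below is a minimal NumPy illustration of that sampling-and-addition step, the operation an accelerator without a CPU must otherwise perform on-chip or offload; it is not a description of DINAR's hardware design, and the sensitivity, epsilon, sigma, and gradient values are illustrative assumptions rather than numbers from the paper.

    import numpy as np

    rng = np.random.default_rng()

    def laplace_mechanism(value, sensitivity, epsilon):
        # Laplace mechanism of differential privacy: noise scale = sensitivity / epsilon
        scale = sensitivity / epsilon
        return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

    def gaussian_mechanism(value, sigma):
        # Gaussian mechanism: noise standard deviation sigma is calibrated offline
        return value + rng.normal(loc=0.0, scale=sigma, size=np.shape(value))

    # Illustrative use on a clipped gradient; the numbers are assumptions for the example
    grad = np.array([0.12, -0.07, 0.31])
    print(laplace_mechanism(grad, sensitivity=1.0, epsilon=0.5))
    print(gaussian_mechanism(grad, sigma=1.0))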

References

1. Martin Abadi et al. 2016. Deep learning with differential privacy. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security.
2. John M. Abowd. 2018. The US Census Bureau adopts differential privacy. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2867–2867.
3. Muhammad Aitsam. 2022. Differential privacy made easy. In Proceedings of the International Conference on Emerging Trends in Electrical, Control, and Telecommunication Engineering (ETECTE). IEEE.
4. Abdulmalik Alwarafy et al. 2021. A Survey on Security and Privacy Issues in Edge-Computing-Assisted Internet of Things. IEEE Internet of Things Journal 8, 6 (2021).
5. Apple. 2017. Learning with Privacy at Scale. https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf
6. Hadi Asghari-Moghaddam et al. 2016. Near-DRAM acceleration with single-ISA heterogeneous processing in standard memory modules. IEEE Micro 36, 1 (2016).
7. Anish Athalye and Nicholas Carlini. 2018. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. https://arxiv.org/abs/1804.03286
8. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. https://arxiv.org/abs/1802.00420
9. Mohammed Bakiri et al. 2018. A Hardware and Secure Pseudorandom Generator for Constrained Devices. IEEE Transactions on Industrial Informatics 14, 8 (2018).
10. Samah Baraheem and Zhongmei Yao. 2022. A Survey on Differential Privacy with Machine Learning and Future Outlook. arXiv preprint arXiv:2211.10708 (2022).
11. Sathwika Bavikadi et al. 2022. A Survey on Machine Learning Accelerators and Evolutionary Hardware Platforms. IEEE Design and Test 39, 3 (2022).
12. Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. 2021. Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. In Proceedings of the Design, Automation and Test in Europe Conference (DATE).
13. George E. P. Box and Mervin E. Muller. 1958. A Note on the Generation of Random Normal Deviates. Annals of Mathematical Statistics 29 (1958), 610–611.
14. Anirban Chakraborty et al. 2018. Adversarial Attacks and Defences: A Survey. https://arxiv.org/abs/1810.00069
15. Yu-Hsin Chen et al. 2016. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits 52, 1 (2016), 127–138.
16. Woo-Seok Choi et al. 2018. Guaranteeing Local Differential Privacy on Ultra-Low-Power Systems. In Proceedings of the International Symposium on Computer Architecture (ISCA).
17. Aritra Dhar, Supraja Sridhara, Shweta Shinde, Srdjan Capkun, and Renzo Andri. 2022. Empowering Data Centers for Next Generation Trusted Computing. arXiv preprint arXiv:2211.00306 (2022).
18. Cynthia Dwork. 2006. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP). Springer.
19. Hassan Edrees et al. 2009. Hardware-Optimized Ziggurat Algorithm for High-Speed Gaussian Random Number Generators. In ERSA. 254–260.
20. Panagiotis Eustratiadis, Henry Gouk, Da Li, and Timothy Hospedales. 2021. Weight-covariance alignment for adversarially robust neural networks. In Proceedings of the 38th International Conference on Machine Learning. PMLR.
21. Amin Farmahini-Farahani et al. 2015. NDA: Near-DRAM acceleration architecture leveraging commodity DRAM devices and standard memory modules. In Proceedings of the 21st IEEE International Symposium on High Performance Computer Architecture (HPCA).
22. Matthew Fredrikson et al. 2014. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. In 23rd USENIX Security Symposium.
23. Yonggan Fu et al. 2021. 2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO).
24. Quan Geng and Pramod Viswanath. 2015. Optimal noise adding mechanisms for approximate differential privacy. IEEE Transactions on Information Theory 62, 2 (2015), 952–969.
25. Graham Gobieski et al. 2021. Snafu: An Ultra-Low-Power, Energy-Minimal CGRA-Generation Framework and Architecture. In Proceedings of the 48th Annual ACM/IEEE International Symposium on Computer Architecture (ISCA).
26. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and Harnessing Adversarial Examples. https://doi.org/10.48550/ARXIV.1412.6572
27. Roberto Gutierrez, Vicente Torres, and Javier Valls. 2012. Hardware Architecture of a Gaussian Noise Generator Based on the Inversion Method. IEEE Transactions on Circuits and Systems II: Express Briefs 59, 8 (2012).
28. Zhezhi He, Adnan Siraj Rakin, and Deliang Fan. 2019. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 588–597.
29. Weizhe Hua, Muhammad Umar, Zhiru Zhang, and G. Edward Suh. 2020. GuardNN: Secure DNN accelerator for privacy-preserving deep learning. arXiv preprint arXiv:2008.11632 (2020).
30. Weizhe Hua, Zhiru Zhang, and G. Edward Suh. 2018. Reverse Engineering Convolutional Neural Networks through Side-Channel Information Leaks. In Proceedings of the 55th Annual Design Automation Conference.
31. Md Shohidul Islam, Behnam Omidi, Ihsen Alouani, and Khaled N. Khasawneh. 2023. VPP: Privacy Preserving Machine Learning via Undervolting. In Proceedings of the IEEE International Symposium on Hardware Oriented Security and Trust (HOST).
32. Shohidul Islam, Ihsen Alouani, and Khaled N. Khasawneh. 2021. Lower Voltage for Higher Security: Using Voltage Overscaling to Secure Deep Neural Networks. In Proceedings of the IEEE/ACM International Conference On Computer Aided Design (ICCAD).
33. Ahmadreza Jeddi et al. 2020. Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
34. Jiankai Jin et al. 2022. Are we there yet? Timing and floating-point attacks on differential privacy systems. In Proceedings of the IEEE Symposium on Security and Privacy (SP). 473–488.
35. Hoki Kim. 2020. Torchattacks: A PyTorch repository for adversarial attacks. https://arxiv.org/abs/2010.01950
36. Hyoukjun Kwon, Ananda Samajdar, and Tushar Krishna. 2018. MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects. In Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems.
37. Mathias Lecuyer et al. 2019. Certified robustness to adversarial examples with differential privacy. In IEEE Symposium on Security and Privacy (SP). IEEE.
38. Dong-U Lee, Ray C. C. Cheung, John D. Villasenor, and Wayne Luk. 2006. Inversion-based hardware Gaussian random number generator: A case study of function evaluation via hierarchical segmentation. In Proceedings of the IEEE International Conference on Field Programmable Technology.
39. Dong-U Lee, John D. Villasenor, Wayne Luk, and Philip Heng Wai Leong. 2006. A hardware Gaussian noise generator using the Box-Muller method and its error analysis. IEEE Transactions on Computers 55, 6 (2006), 659–671.
40. Jaewoo Lee and Chris Clifton. 2011. How much is enough? Choosing ε for differential privacy. In Proceedings of the 14th International Conference on Information Security (ISC).
41. Aleksander Madry et al. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. https://arxiv.org/abs/1706.06083
42. Aleksander Madry et al. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations.
43. Saikat Majumdar, Mohammad Hossein Samavatian, Kristin Barber, and Radu Teodorescu. 2021. Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. In Proceedings of the IEEE International Symposium on Hardware Oriented Security and Trust (HOST).
44. Jamshaid Sarwar Malik and Ahmed Hemani. 2016. Gaussian random number generation: A survey on hardware architectures. ACM Computing Surveys (CSUR) 49, 3 (2016).
45. Fatemehsadat Mireshghallah et al. 2020. Shredder: Learning Noise Distributions to Protect Inference Privacy. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems.
46. Fatemehsadat Mireshghallah et al. 2021. Not all features are equal: Discovering essential features for preserving prediction privacy. In Proceedings of the Web Conference 2021.
47. Ilya Mironov. 2012. On Significance of the Least Significant Bits for Differential Privacy. In Proceedings of the 2012 ACM Conference on Computer and Communications Security.
48. Chaya Nayak. 2020. New privacy-protected Facebook data for independent research on social media's impact on democracy. https://research.facebook.com/blog/2020/2/new-privacy-protected-facebook-data-for-independent-research-on-social-medias-impact-on-democracy/
49. Nicolas Papernot et al. 2016. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In 2016 IEEE Symposium on Security and Privacy (SP). 582–597.
50. Beomsik Park et al. 2022. DiVa: An Accelerator for Differentially Private Machine Learning. In Proceedings of the 55th IEEE/ACM International Symposium on Microarchitecture (MICRO).
51. Adam Paszke et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
52. Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, and Kaushik Roy. 2021. On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks. In 2021 58th ACM/IEEE Design Automation Conference (DAC).
53. Deboleena Roy, Chun Tao, Indranil Chakraborty, and Kaushik Roy. 2021. On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars. https://arxiv.org/abs/2109.09060
54. Michael Schulte and Earl Swartzlander. 1994. Hardware designs for exactly rounded elementary functions. IEEE Transactions on Computers 43, 8 (1994).
55. Liwei Song, Reza Shokri, and Prateek Mittal. 2019. Privacy Risks of Securing Machine Learning Models against Adversarial Examples. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security.
56. Tom Titcombe et al. 2021. Practical defences against model inversion attacks for split neural networks. In Proceedings of the Workshop on Distributed and Private Machine Learning (DPML), co-located with ICLR.
57. Xingbin Wang et al. 2019. NPUFort: A secure architecture of DNN accelerator against model inversion attack. In Proceedings of the 16th ACM International Conference on Computing Frontiers.
58. Lingxiao Wei et al. 2018. I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators. In Proceedings of the 34th Annual Computer Security Applications Conference.
59. Yannan N. Wu, Joel S. Emer, and Vivienne Sze. 2019. Accelergy: An Architecture-Level Energy Estimation Methodology for Accelerator Designs. In IEEE/ACM International Conference On Computer Aided Design (ICCAD).
60. Dingqing Yang, Prashant J. Nair, and Mieszko Lis. 2023. HuffDuff: Stealing Pruned DNNs from Sparse Accelerators. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
61. Hao Yang et al. 2022. Rethinking feature uncertainty in stochastic neural networks for adversarial robustness. arXiv preprint arXiv:2201.00148 (2022).
62. Ashkan Yousefpour et al. 2021. Opacus: User-Friendly Differential Privacy Library in PyTorch. arXiv preprint arXiv:2109.12298 (2021).
63. Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2017. Adversarial Examples: Attacks and Defenses for Deep Learning. https://doi.org/10.48550/ARXIV.1712.07107
64. Guanglie Zhang et al. 2005. Ziggurat-based hardware Gaussian random number generator. In Proceedings of the IEEE International Conference on Field Programmable Logic and Applications.
65. Huan Zhang et al. 2019. The Limitations of Adversarial Training and the Blind-Spot Attack. In Proceedings of the International Conference on Learning Representations.

        • Published in

          HASP '23: Proceedings of the 12th International Workshop on Hardware and Architectural Support for Security and Privacy
          October 2023
          106 pages
          ISBN:9798400716232
          DOI:10.1145/3623652

          Copyright © 2023 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 29 October 2023


          Qualifiers

          • research-article
          • Research
          • Refereed limited

          Acceptance Rates

Overall Acceptance Rate: 9 of 13 submissions, 69%
        • Article Metrics

• Downloads (last 12 months): 81
• Downloads (last 6 weeks): 19

