ABSTRACT
Machine learning (ML) has seen a major rise in popularity on edge devices in recent years, ranging from IoT devices to self-driving cars. Security is a critical consideration on these platforms. State-of-the-art security-centric ML algorithms (e.g., differentially private ML, adversarial robustness) require noise sampled from Laplace or Gaussian distributions. Edge ML accelerators [15, 25, 36, 50] lack CPUs that could add such noise in software. Existing hardware approaches to generate noise on-the-fly incur high overheads and leak side-channel information that can undermine security [34, 47]. To remedy this, we propose DINAR, lightweight hardware that enables noise addition from arbitrary distributions. For differentially private ML, DINAR enables noise addition while incurring 23× lower area and 40× lower energy compared to producing noise directly on-chip.
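To make the abstract's noise requirement concrete, the following is a minimal software sketch (not DINAR's hardware design) of the two samplers the cited hardware generators target: Box-Muller Gaussian sampling [15] and inverse-CDF Laplace sampling for the Laplace mechanism of differential privacy [20]. Function names and the clamping constants are illustrative choices, not from the paper.

```python
import math
import random

def box_muller():
    """One standard-normal sample via the Box-Muller transform,
    the method many hardware Gaussian noise generators implement."""
    u1 = random.random()
    u2 = random.random()
    u1 = max(u1, 1e-12)  # guard against log(0); random() is in [0, 1)
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def laplace_noise(scale):
    """Zero-mean Laplace sample via inverse-CDF sampling."""
    u = random.random() - 0.5
    u = min(max(u, -0.499999), 0.499999)  # guard against log(0) at the tail
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def add_dp_noise(values, sensitivity, epsilon):
    """Laplace mechanism: perturb each value with Laplace(sensitivity/epsilon)
    noise, the basic primitive differentially private ML relies on."""
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale) for v in values]
```

Computing either sampler on-chip requires log, sqrt, and trigonometric units; avoiding that cost for every noise sample is the overhead DINAR targets.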
REFERENCES
- Martin Abadi et al. 2016. Deep learning with differential privacy. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security.
- John M. Abowd. 2018. The US Census Bureau adopts differential privacy. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2867–2867.
- Muhammad Aitsam. 2022. Differential privacy made easy. In Proceedings of the International Conference on Emerging Trends in Electrical, Control, and Telecommunication Engineering (ETECTE). IEEE.
- Abdulmalik Alwarafy et al. 2021. A Survey on Security and Privacy Issues in Edge-Computing-Assisted Internet of Things. IEEE Internet of Things Journal 8, 6 (2021).
- Apple. 2017. Learning with Privacy at Scale. https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf.
- Hadi Asghari-Moghaddam et al. 2016. Near-DRAM acceleration with single-ISA heterogeneous processing in standard memory modules. IEEE Micro 36, 1 (2016).
- Anish Athalye and Nicholas Carlini. 2018. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. https://arxiv.org/abs/1804.03286
- Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. https://arxiv.org/abs/1802.00420
- Mohammed Bakiri et al. 2018. A Hardware and Secure Pseudorandom Generator for Constrained Devices. IEEE Transactions on Industrial Informatics 14, 8 (2018).
- Samah Baraheem and Zhongmei Yao. 2022. A Survey on Differential Privacy with Machine Learning and Future Outlook. arXiv preprint arXiv:2211.10708 (2022).
- Sathwika Bavikadi et al. 2022. A Survey on Machine Learning Accelerators and Evolutionary Hardware Platforms. IEEE Design and Test 39, 3 (2022).
- Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. 2021. Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks. In Proceedings of the Design, Automation and Test in Europe Conference (DATE).
- George E. P. Box and Mervin E. Muller. 1958. A Note on the Generation of Random Normal Deviates. Annals of Mathematical Statistics 29 (1958), 610–611.
- Anirban Chakraborty et al. 2018. Adversarial Attacks and Defences: A Survey. https://arxiv.org/abs/1810.00069
- Yu-Hsin Chen et al. 2016. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits 52, 1 (2016), 127–138.
- Woo-Seok Choi et al. 2018. Guaranteeing Local Differential Privacy on Ultra-Low-Power Systems. In Proceedings of the International Symposium on Computer Architecture (ISCA).
- Aritra Dhar, Supraja Sridhara, Shweta Shinde, Srdjan Capkun, and Renzo Andri. 2022. Empowering Data Centers for Next Generation Trusted Computing. arXiv preprint arXiv:2211.00306 (2022).
- Cynthia Dwork. 2006. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP). Springer.
- Hassan Edrees et al. 2009. Hardware-Optimized Ziggurat Algorithm for High-Speed Gaussian Random Number Generators. In ERSA. 254–260.
- Panagiotis Eustratiadis, Henry Gouk, Da Li, and Timothy Hospedales. 2021. Weight-covariance alignment for adversarially robust neural networks. In Proceedings of the 38th International Conference on Machine Learning. PMLR.
- Amin Farmahini-Farahani et al. 2015. NDA: Near-DRAM acceleration architecture leveraging commodity DRAM devices and standard memory modules. In Proceedings of the 21st IEEE International Symposium on High Performance Computer Architecture (HPCA).
- Matthew Fredrikson et al. 2014. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. In 23rd USENIX Security Symposium.
- Yonggan Fu et al. 2021. 2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO).
- Quan Geng and Pramod Viswanath. 2015. Optimal noise adding mechanisms for approximate differential privacy. IEEE Transactions on Information Theory 62, 2 (2015), 952–969.
- Graham Gobieski et al. 2021. Snafu: An Ultra-Low-Power, Energy-Minimal CGRA-Generation Framework and Architecture. In Proceedings of the 48th Annual ACM/IEEE International Symposium on Computer Architecture (ISCA).
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and Harnessing Adversarial Examples. https://doi.org/10.48550/ARXIV.1412.6572
- Roberto Gutierrez, Vicente Torres, and Javier Valls. 2012. Hardware Architecture of a Gaussian Noise Generator Based on the Inversion Method. IEEE Transactions on Circuits and Systems II: Express Briefs 59, 8 (2012).
- Zhezhi He, Adnan Siraj Rakin, and Deliang Fan. 2019. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 588–597.
- Weizhe Hua, Muhammad Umar, Zhiru Zhang, and G. Edward Suh. 2020. GuardNN: Secure DNN accelerator for privacy-preserving deep learning. arXiv preprint arXiv:2008.11632 (2020).
- Weizhe Hua, Zhiru Zhang, and G. Edward Suh. 2018. Reverse Engineering Convolutional Neural Networks through Side-Channel Information Leaks. In Proceedings of the 55th Annual Design Automation Conference.
- Md Shohidul Islam, Behnam Omidi, Ihsen Alouani, and Khaled N. Khasawneh. 2023. VPP: Privacy Preserving Machine Learning via Undervolting. In Proceedings of the IEEE International Symposium on Hardware Oriented Security and Trust (HOST).
- Shohidul Islam, Ihsen Alouani, and Khaled N. Khasawneh. 2021. Lower Voltage for Higher Security: Using Voltage Overscaling to Secure Deep Neural Networks. In Proceedings of the IEEE/ACM International Conference On Computer Aided Design (ICCAD).
- Ahmadreza Jeddi et al. 2020. Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Jiankai Jin et al. 2022. Are we there yet? Timing and floating-point attacks on differential privacy systems. In Proceedings of the IEEE Symposium on Security and Privacy (SP). 473–488.
- Hoki Kim. 2020. Torchattacks: A PyTorch repository for adversarial attacks. https://arxiv.org/abs/2010.01950
- Hyoukjun Kwon, Ananda Samajdar, and Tushar Krishna. 2018. MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects. In Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems.
- Mathias Lecuyer et al. 2019. Certified robustness to adversarial examples with differential privacy. In IEEE Symposium on Security and Privacy (SP). IEEE.
- Dong-U Lee, Ray C.C. Cheung, John D. Villasenor, and Wayne Luk. 2006. Inversion-based hardware Gaussian random number generator: A case study of function evaluation via hierarchical segmentation. In Proceedings of the IEEE International Conference on Field Programmable Technology.
- Dong-U Lee, John D. Villasenor, Wayne Luk, and Philip Heng Wai Leong. 2006. A hardware Gaussian noise generator using the Box-Muller method and its error analysis. IEEE Transactions on Computers 55, 6 (2006), 659–671.
- Jaewoo Lee and Chris Clifton. 2011. How much is enough? Choosing ε for differential privacy. In Proceedings of the 14th International Conference on Information Security (ISC).
- Aleksander Madry et al. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. https://arxiv.org/abs/1706.06083
- Aleksander Madry et al. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations.
- Saikat Majumdar, Mohammad Hossein Samavatian, Kristin Barber, and Radu Teodorescu. 2021. Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks. In Proceedings of the IEEE International Symposium on Hardware Oriented Security and Trust (HOST).
- Jamshaid Sarwar Malik and Ahmed Hemani. 2016. Gaussian random number generation: A survey on hardware architectures. ACM Computing Surveys (CSUR) 49, 3 (2016).
- Fatemehsadat Mireshghallah et al. 2020. Shredder: Learning Noise Distributions to Protect Inference Privacy. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems.
- Fatemehsadat Mireshghallah et al. 2021. Not all features are equal: Discovering essential features for preserving prediction privacy. In Proceedings of the Web Conference 2021.
- Ilya Mironov. 2012. On Significance of the Least Significant Bits for Differential Privacy. In Proceedings of the 2012 ACM Conference on Computer and Communications Security.
- Chaya Nayak. 2020. New privacy-protected Facebook data for independent research on social media's impact on democracy. https://research.facebook.com/blog/2020/2/new-privacy-protected-facebook-data-for-independent-research-on-social-medias-impact-on-democracy/.
- Nicolas Papernot et al. 2016. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In 2016 IEEE Symposium on Security and Privacy (SP). 582–597.
- Beomsik Park et al. 2022. DiVa: An Accelerator for Differentially Private Machine Learning. In Proceedings of the 55th IEEE/ACM International Symposium on Microarchitecture (MICRO).
- Adam Paszke et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
- Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, and Kaushik Roy. 2021. On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks. In 2021 58th ACM/IEEE Design Automation Conference (DAC).
- Deboleena Roy, Chun Tao, Indranil Chakraborty, and Kaushik Roy. 2021. On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars. https://arxiv.org/abs/2109.09060
- Michael Schulte and Earl Swartzlander. 1994. Hardware designs for exactly rounded elementary functions. IEEE Trans. Comput. 43, 8 (1994).
- Liwei Song, Reza Shokri, and Prateek Mittal. 2019. Privacy Risks of Securing Machine Learning Models against Adversarial Examples. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security.
- Tom Titcombe et al. 2021. Practical defences against model inversion attacks for split neural networks. In Proceedings of the Workshop on Distributed and Private Machine Learning (DPML) co-located with ICLR.
- Xingbin Wang et al. 2019. NPUFort: a secure architecture of DNN accelerator against model inversion attack. In Proceedings of the 16th ACM International Conference on Computing Frontiers.
- Lingxiao Wei et al. 2018. I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators. In Proceedings of the 34th Annual Computer Security Applications Conference.
- Yannan N. Wu, Joel S. Emer, and Vivienne Sze. 2019. Accelergy: An Architecture-Level Energy Estimation Methodology for Accelerator Designs. In IEEE/ACM International Conference On Computer Aided Design (ICCAD).
- Dingqing Yang, Prashant J. Nair, and Mieszko Lis. 2023. HuffDuff: Stealing Pruned DNNs from Sparse Accelerators. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
- Hao Yang et al. 2022. Rethinking feature uncertainty in stochastic neural networks for adversarial robustness. arXiv preprint arXiv:2201.00148 (2022).
- Ashkan Yousefpour et al. 2021. Opacus: User-Friendly Differential Privacy Library in PyTorch. arXiv preprint arXiv:2109.12298 (2021).
- Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2017. Adversarial Examples: Attacks and Defenses for Deep Learning. https://doi.org/10.48550/ARXIV.1712.07107
- Guanglie Zhang et al. 2005. Ziggurat-based hardware Gaussian random number generator. In Proceedings of the IEEE International Conference on Field Programmable Logic and Applications.
- Huan Zhang et al. 2019. The Limitations of Adversarial Training and the Blind-Spot Attack. In Proceedings of the International Conference on Learning Representations.
Index Terms
- DINAR: Enabling Distribution Agnostic Noise Injection in Machine Learning Hardware