keynote

Systemization of Knowledge: Robust Deep Learning using Hardware-software co-design in Centralized and Federated Settings

Published: 16 October 2023

Abstract

Deep learning (DL) models are enabling a significant paradigm shift in a diverse range of fields, including natural language processing and computer vision, as well as the design and automation of complex integrated circuits. While deep models, and optimizations built on them such as deep reinforcement learning (RL), demonstrate superior performance and a strong capability for automated representation learning, earlier works have revealed the vulnerability of DL to various attacks, including adversarial samples, model poisoning, and fault injection. On the one hand, these security threats can divert the behavior of a DL model and lead to incorrect decisions in critical tasks. On the other hand, the susceptibility of DL to attacks may thwart trustworthy technology transfer and reliable DL deployment. In this work, we investigate existing defense techniques that protect DL against the above-mentioned threats. In particular, we review end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. Our comprehensive taxonomy and horizontal comparisons reveal that defense strategies developed using DL/software/hardware co-design outperform their DL/software-only counterparts and can achieve highly efficient, latency-optimized defenses for real-world applications. We believe our systemization of knowledge sheds light on the promise of hardware-software co-design for DL security and can guide the development of future defenses.
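To make the first threat class named above concrete, the sketch below shows one common way an adversarial sample is crafted: the fast gradient sign method (FGSM). This is an illustrative example only, not the paper's method or taxonomy; it assumes PyTorch and torchvision, a pretrained ResNet-18, inputs scaled to [0, 1], and a hypothetical perturbation budget epsilon.

    # Illustrative sketch (not from the paper): crafting an FGSM adversarial example,
    # one of the attack classes the abstract lists. Assumes torchvision's pretrained
    # ResNet-18 and image batches x (values in [0, 1]) with ground-truth labels y.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_attack(x: torch.Tensor, y: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
        """Return a perturbed copy of x that increases the classifier's loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the sign of the loss gradient, then clip back to the valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

A defense of the kind surveyed in this work would aim to detect or neutralize such perturbed inputs, ideally with hardware support to keep detection latency low.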



        • Published in

ACM Transactions on Design Automation of Electronic Systems, Volume 28, Issue 6
          November 2023
          404 pages
          ISSN: 1084-4309
          EISSN: 1557-7309
          DOI: 10.1145/3627977

          Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 16 October 2023
          • Online AM: 23 August 2023
          • Accepted: 12 August 2023
          • Revised: 10 May 2023
          • Received: 14 October 2022
Published in TODAES Volume 28, Issue 6

