ABSTRACT
A backdoor attack injects a small number of poisoned samples into the training dataset of a Deep Neural Network (DNN), enabling the attacker to implant a hidden manipulation that can be triggered at inference time to produce attacker-controlled behavior, posing serious risks in real-world deployments. In this paper, we focus on the safety-critical task of pedestrian detection and propose a novel backdoor trigger that exploits the Moiré effect, a common physical phenomenon in which overlapping periodic structures introduce interference patterns into camera-captured images. Our method comprises three key steps. First, we analyze the cause of the Moiré effect and simulate its patterns on pedestrians' clothing. Next, we embed these Moiré patterns as a backdoor trigger into digital images and use the poisoned dataset to train a backdoored detector. Finally, we test the trained detector in the physical world with clothing that generates Moiré patterns. We demonstrate that individuals wearing such clothes effectively evade detection by the backdoored model, while regular clothes do not trigger the attack, keeping it covert. Extensive experiments in both the digital and physical domains demonstrate the effectiveness of our proposed Moiré Backdoor Attack.
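The paper's own trigger-simulation pipeline is not reproduced here, but the underlying physics is simple: a moiré fringe appears when two high-frequency periodic structures (e.g., a clothing texture and the camera's sensor grid) are superimposed at slightly different orientations, producing a new low-frequency interference pattern. The following is a minimal illustrative sketch, not the authors' method; the grating frequency, rotation angle, and the use of a multiplicative superposition are all assumptions chosen for clarity.

```python
import numpy as np

def grating(size, freq, angle_deg):
    """Sinusoidal grating of the given frequency, rotated by angle_deg.

    Returns an array with values in [0, 1], imitating one periodic
    structure (e.g., a fine stripe texture on fabric).
    """
    ys, xs = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Coordinate along the grating's oscillation direction.
    coord = xs * np.cos(theta) + ys * np.sin(theta)
    return 0.5 + 0.5 * np.sin(2 * np.pi * freq * coord / size)

def moire_pattern(size=256, freq=40, angle_deg=5.0):
    """Superimpose two identical gratings differing by a small rotation.

    Their pointwise product contains a low-frequency beat: the moiré
    fringes that a camera would record when photographing such a texture.
    """
    g1 = grating(size, freq, 0.0)
    g2 = grating(size, freq, angle_deg)
    return g1 * g2  # values stay in [0, 1]

pattern = moire_pattern()
```

In a data-poisoning setting, an image like `pattern` would be blended into the clothing region of a small fraction of training images, so the detector learns to associate the fringe pattern with the "no pedestrian" outcome.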
Moiré Backdoor Attack (MBA): A Novel Trigger for Pedestrian Detectors in the Physical World