Abstract
This work presents a novel application of adversarial attacks to the domain of video games: specifically, the exploitation of computer vision-based aim assist tools in the first-person shooter genre. Aim assist tools (also referred to as aimbots) are among the greatest issues plaguing modern shooters, as they greatly increase the speed and accuracy of cheating players and give them an unfair advantage over their competitors. The latest versions of these tools rely on object detection models such as YOLO (You Only Look Once); fortunately, such models are vulnerable to small perturbations of their input space that cause objects to be misclassified. The purpose of this work is to formulate an attack on a black-box object detection model that can feasibly be implemented in a commercial game environment. What makes our solution unique is that the attack images are generated as in-game objects rendered by the game engine itself, rather than from a set of screenshots or a generic differentiable renderer. Results show that our approach can generate adversarial examples that fool an object detection model in a black-box setting, while recreating the game’s original textures closely enough that the perturbations go unnoticed by players.
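The black-box attack described above can be illustrated with a deliberately simplified sketch. The snippet below is not the authors' implementation: the toy detector, its template, and every hyperparameter are made-up stand-ins, and a plain evolution-strategies optimizer (a standard choice for query-only black-box attacks) is swapped in for whatever optimizer the paper uses. It shows only the core loop: probe the detector with noisy copies of a texture, estimate a descent direction from the returned confidence scores alone, and keep the perturbation within a small budget so the texture still resembles the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the black-box detector: confidence is high when the
# queried texture correlates with a fixed "player" template. The real
# target would be a YOLO-style model queried only through its outputs.
TEMPLATE = rng.random((8, 8))

def detection_confidence(texture: np.ndarray) -> float:
    corr = float((texture * TEMPLATE).mean())
    return float(1.0 / (1.0 + np.exp(-12.0 * (corr - 0.25))))

def es_attack(texture, eps=0.15, steps=300, pop=16, sigma=0.1, lr=0.5):
    """Lower detector confidence using only score queries (no gradients),
    keeping the perturbed texture within an eps-ball of the original."""
    x = texture.copy()
    for _ in range(steps):
        noise = rng.standard_normal((pop,) + x.shape)
        scores = np.array([detection_confidence(x + sigma * n) for n in noise])
        scores -= scores.mean()  # baseline subtraction reduces variance
        # Evolution-strategies gradient estimate: correlate probe scores
        # with the noise directions that produced them.
        grad = (scores[:, None, None] * noise).mean(axis=0) / sigma
        x = np.clip(x - lr * grad, texture - eps, texture + eps)
        x = np.clip(x, 0.0, 1.0)  # stay a valid texture
    return x

original = TEMPLATE.copy()   # a texture the toy detector is confident on
adversarial = es_attack(original)
print(detection_confidence(original), detection_confidence(adversarial))
```

In the paper's setting, the probe textures would instead be rendered in-engine onto game objects before being scored, and the eps budget is what keeps the adversarial texture visually indistinguishable from the original.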
Copyright information
© 2023 IFIP International Federation for Information Processing
About this paper
Cite this paper
Babin, M., Katchabaw, M. (2023). Combating Computer Vision-Based Aim Assist Tools in Competitive Online Games. In: Ciancarini, P., Di Iorio, A., Hlavacs, H., Poggi, F. (eds) Entertainment Computing – ICEC 2023. ICEC 2023. Lecture Notes in Computer Science, vol 14455. Springer, Singapore. https://doi.org/10.1007/978-981-99-8248-6_24
DOI: https://doi.org/10.1007/978-981-99-8248-6_24
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8247-9
Online ISBN: 978-981-99-8248-6
eBook Packages: Computer Science, Computer Science (R0)