Abstract
This work presents a study on building a deep vision pipeline suitable for the RoboCup Standard Platform League, a humanoid robot soccer tournament. Specifically, we focus on end-to-end trainable object detection for effective perception using Aldebaran NAO v6 robots. Implementing such a detector poses two major challenges: speed, and resource-effectiveness with respect to memory and computational power. We benchmark architectures based on the YOLO and SSD detection paradigms and identify variants that achieve good ball-detection performance while supporting rapid inference. To augment the training data for these networks, we also create a dataset from logs collected by the UT Austin Villa team during previous competitions and set up an annotation pipeline for training. Using these results and the training pipeline, we realize a practical, multi-class object detector that enables the robot's vision system to run at 35 Hz while maintaining good detection performance.
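To make the reported runtime concrete, the following is a minimal, illustrative sketch (not the authors' released code) of the kind of inference loop such a pipeline would time: a TensorFlow Lite export of a small YOLO- or SSD-style detector run on a single downscaled camera frame while its latency is measured. The model file name, thread count, and input shape are assumptions for illustration only.

```python
# Illustrative sketch only: timing one forward pass of an assumed TFLite
# detector ("ball_detector.tflite" is a hypothetical export, not the paper's).
import time

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ball_detector.tflite",
                                  num_threads=2)  # thread count is an assumption
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

# Stand-in for a downscaled NAO camera frame, shaped to whatever the model expects.
frame = np.random.randint(0, 256, size=input_details["shape"]).astype(
    input_details["dtype"])

start = time.perf_counter()
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
latency_ms = (time.perf_counter() - start) * 1000.0

# Raw detector outputs (box/score layout depends on the exported model).
outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
print(f"single-frame latency: {latency_ms:.1f} ms (~{1000.0 / latency_ms:.0f} Hz)")
```

On the robot itself the detector would be embedded in the team's C++ vision code rather than a Python loop; the sketch only mirrors the latency-measurement idea at a high level.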
Acknowledgements
The authors thank Juhyun Lee, Terry Heo and others at Google for their invaluable help with setting up and understanding the best practices in using TensorFlow Lite. This work has taken place in the Learning Agents Research Group (LARG) at the Department of Computer Science, The University of Texas at Austin. LARG research is supported in part by the National Science Foundation (CPS-1739964, IIS-1724157, FAIN-2019844), the Office of Naval Research (N00014-18-2243), Army Research Office (W911NF-19-2-0333), DARPA, Lockheed Martin, General Motors, Bosch, and Good Systems, a research grand challenge at the University of Texas at Austin. The views and conclusions contained in this document are those of the authors alone. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Narayanaswami, S.K. et al. (2023). Towards a Real-Time, Low-Resource, End-to-End Object Detection Pipeline for Robot Soccer. In: Eguchi, A., Lau, N., Paetzel-Prüsmann, M., Wanichanon, T. (eds) RoboCup 2022: Robot World Cup XXV. Lecture Notes in Computer Science, vol. 13561. Springer, Cham. https://doi.org/10.1007/978-3-031-28469-4_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-28468-7
Online ISBN: 978-3-031-28469-4