
Deep learning applied to humanoid soccer robotics: playing without using any color information

Published in: Autonomous Robots

Abstract

The goal of this paper is to describe a vision system for humanoid robot soccer players that does not use any color information, and whose object detectors are based on convolutional neural networks. The main features of this system are: (i) real-time operation in computationally constrained humanoid robots, and (ii) the ability to robustly detect the ball, the pose of the robot players, and the goals, lines, and other key field features. The proposed vision system is validated in the RoboCup Standard Platform League, where humanoid NAO robots are used. Tests are carried out under realistic and highly demanding game conditions, where very high performance is obtained: a robot detection accuracy of 94.90%, a ball detection accuracy of 97.10%, and a correct determination of the robot orientation 99.88% of the time when the observed robot is static, and 95.52% when it is moving.
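The abstract describes CNN-based detectors that operate without color information, i.e. on single-channel (grayscale) image patches. As a rough illustration of that idea only (not the authors' architecture; the kernel values, patch size, and scoring are hypothetical), the sketch below applies one convolution + ReLU + global-pooling stage to a grayscale candidate patch, the kind of building block such detectors stack:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2D cross-correlation on a single-channel patch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical 3x3 edge-like kernel; a trained detector learns these weights.
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])

patch = np.random.rand(32, 32)            # grayscale candidate patch, no color channels
feat = relu(conv2d_valid(patch, kernel))  # one conv + ReLU stage
score = feat.mean()                       # global average pooling -> scalar confidence
print(feat.shape)  # (30, 30)
```

In a full detector, several such stages followed by a small classifier head would decide whether the patch contains, e.g., the ball; operating on a single channel keeps the input tensor (and hence the compute cost) small enough for the NAO's constrained hardware.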





Acknowledgements

The authors thank Kenzo Lobos-Tsunekawa for his contributions to the development of the robot detection system. We also thank Ignacio Bugueño for implementing the detection of the major and minor lines, which are used to estimate the precise rotation of the robot, and for helping to generate some of the databases used in this paper. This work was partially funded by ANID (Chile) Projects FONDECYT 1201170, PIA AFB 180004, and CONICYT-PFCHA/Magíster Nacional/2018-22182130.


Corresponding author

Correspondence to Javier Ruiz-del-Solar.



Cite this article

Cruz, N., Leiva, F. & Ruiz-del-Solar, J. Deep learning applied to humanoid soccer robotics: playing without using any color information. Auton Robot 45, 335–350 (2021). https://doi.org/10.1007/s10514-021-09966-9
