
Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Conference paper in ROBOT2022: Fifth Iberian Robotics Conference (ROBOT 2022). Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 589).

Abstract

To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, the visual perception systems needed by forestry robots remain an open research problem and must be explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark of deep learning models on edge-computing hardware. The dataset comprises nearly 5,400 annotated images and was used to train nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors and YOLOR. These detectors were deployed and tested on three edge-computing devices (TPU, CPU and GPU) and evaluated in terms of detection precision and inference time. The results show that YOLOR was the best trunk detector, achieving nearly 90% F1 score with an average inference time of 13.7 ms on GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
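As a hedged illustration of the evaluation metric mentioned above (not the authors' actual evaluation code), the F1 score of a detector at a fixed IoU threshold can be derived from counts of true positives, false positives and false negatives; the counts used below are made-up example values.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall and F1 from detection counts.

    tp: detections matched to a ground-truth trunk (IoU above threshold)
    fp: detections with no matching ground-truth box
    fn: ground-truth trunks missed by the detector
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Illustrative counts only, not taken from the paper:
p, r, f1 = precision_recall_f1(tp=900, fp=100, fn=100)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Since F1 is the harmonic mean of precision and recall, a detector scoring around 90% F1 must keep both false positives and missed trunks low simultaneously.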


Notes

  1. https://github.com/ultralytics/yolov5
  2. https://gopro.com/en/gb/update/hero6
  3. https://www.flir.eu/products/m232
  4. https://www.stereolabs.com/zed
  5. https://www.alliedvision.com/en/camera-selector/detail/mako/G-125
  6. https://store.opencv.ai/products/oak-d
  7. https://github.com/openvinotoolkit/cvat
  8. https://github.com/tensorflow/models/tree/master/research/object_detection
  9. https://pytorch.org/
  10. https://github.com/AlexeyAB/darknet
  11. https://coral.ai/products/accelerator
  12. https://developer.nvidia.com/cuda-gpus
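The benchmark reports average inference time per image on each device. A minimal, hypothetical timing harness for any detector callable could look like the sketch below; `fake_detect` is a stand-in for a real model's inference call, not code from the paper.

```python
import time


def average_inference_ms(detect, inputs, warmup: int = 3) -> float:
    """Return the mean per-call latency of `detect` in milliseconds.

    A few warm-up calls are run first and excluded, so one-off setup
    costs (model loading, JIT compilation, cache warming) do not
    skew the reported average.
    """
    for x in inputs[:warmup]:
        detect(x)
    start = time.perf_counter()
    for x in inputs:
        detect(x)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / len(inputs)


# Stand-in detector: pretend each "image" costs a fixed amount of work.
fake_detect = lambda image: sum(range(1000))
print(f"avg latency: {average_inference_ms(fake_detect, list(range(50))):.3f} ms")
```

Using a monotonic high-resolution clock (`time.perf_counter`) and averaging over many images is the usual way to make per-image latency figures, such as the 13.7 ms reported for YOLOR on GPU, robust to timer resolution and scheduling noise.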


Acknowledgement

This work is financed by the ERDF - European Regional Development Fund, through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme under the Portugal 2020 Partnership Agreement, within project SMARTCUT, with reference POCI-01-0247-FEDER-048183. Daniel Queirós da Silva thanks the FCT-Foundation for Science and Technology, Portugal for the Ph.D. Grant UI/BD/152564/2022.

Author information

Correspondence to Daniel Queirós da Silva.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

da Silva, D.Q., dos Santos, F.N., Filipe, V., Sousa, A.J. (2023). Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations. In: Tardioli, D., Matellán, V., Heredia, G., Silva, M.F., Marques, L. (eds) ROBOT2022: Fifth Iberian Robotics Conference. ROBOT 2022. Lecture Notes in Networks and Systems, vol 589. Springer, Cham. https://doi.org/10.1007/978-3-031-21065-5_4
