Synthetically Labeled Images for Maize Plant Detection in UAS Images

  • Conference paper
  • First Online:
Advances in Visual Computing (ISVC 2023)

Abstract

The detection of individual plants within field images is critical for many applications in precision agriculture and research. Computer vision models for object detection, while often highly accurate, require large amounts of labeled data for training, something that is not readily available for most plants. To address the challenge of creating large datasets with accurate labels, we used indoor images of maize plants to create synthetic field images with automatically derived bounding box labels, enabling the generation of thousands of synthetic images without any manual labeling. Training an object detection model (Faster R-CNN) exclusively on synthetic images led to a mean average precision (mAP) value of 0.533 when the model was evaluated on pre-processed real plot images. When fine-tuned with a small number of real plot images, the model pre-trained on the synthetic images (mAP = 0.884) outperformed the model that was not pre-trained.
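The abstract's pipeline has two technical steps that a short sketch can make concrete. The code below is not the authors' implementation; it is a minimal Python illustration, assuming Pillow is available and that the indoor maize images have already been segmented into RGBA cut-outs with transparent backgrounds. Plant cut-outs are pasted onto a field background at random positions and scales, and each paste rectangle is recorded as a bounding box, which is how labels can be derived automatically rather than drawn by hand.

    # Minimal sketch, not the authors' code. Assumes Pillow is installed and the
    # indoor plant images are RGBA cut-outs with transparent backgrounds.
    import random
    from PIL import Image

    def make_synthetic_plot(background_path, plant_paths, n_plants=12, seed=None):
        """Composite plant cut-outs onto a background; return (image, boxes)."""
        rng = random.Random(seed)
        canvas = Image.open(background_path).convert("RGB")
        width, height = canvas.size
        boxes = []  # one [xmin, ymin, xmax, ymax] per pasted plant
        for _ in range(n_plants):
            plant = Image.open(rng.choice(plant_paths)).convert("RGBA")
            scale = rng.uniform(0.05, 0.15)                     # vary apparent plant size
            pw = max(1, int(width * scale))
            ph = max(1, int(pw * plant.height / plant.width))   # keep aspect ratio
            plant = plant.resize((pw, ph))
            x = rng.randint(0, max(0, width - pw))
            y = rng.randint(0, max(0, height - ph))
            canvas.paste(plant, (x, y), plant)                  # alpha masks out the lab background
            boxes.append([x, y, min(x + pw, width), min(y + ph, height)])
        return canvas, boxes

For the detection step, a hedged sketch of how a Faster R-CNN pre-trained on such synthetic images could be loaded for fine-tuning on a small set of real plot images, using torchvision (the checkpoint file name is hypothetical):

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Two classes: background and maize plant.
    model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
    # Hypothetical checkpoint saved after training on the synthetic images.
    model.load_state_dict(torch.load("synthetic_pretrained.pth"))
    model.train()  # continue training on the few labeled real plot images

Because the boxes are known exactly at paste time, the same generation loop can emit labels directly in COCO or PASCAL VOC format for any detector.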

Acknowledgements

This research was supported in part by an appointment to the Agricultural Research Service (ARS) Research Participation Program administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy (DOE) and the U.S. Department of Agriculture (USDA). ORISE is managed by ORAU under DOE contract number DE-SC0014664. Funding was provided by the United States Department of Agriculture, Agricultural Research Service, SCINet Postdoctoral Fellows Program. All opinions expressed in this publication are the authors' and do not necessarily reflect the policies and views of USDA, DOE, or ORAU/ORISE.

Author information

Corresponding author

Correspondence to Jacob D. Washburn.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Pandey, P., Best, N.B., Washburn, J.D. (2023). Synthetically Labeled Images for Maize Plant Detection in UAS Images. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14361. Springer, Cham. https://doi.org/10.1007/978-3-031-47969-4_42

  • DOI: https://doi.org/10.1007/978-3-031-47969-4_42

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47968-7

  • Online ISBN: 978-3-031-47969-4

  • eBook Packages: Computer Science, Computer Science (R0)
