
Towards Reliable and Efficient Vegetation Segmentation for Australian Wheat Data Analysis

  • Conference paper
Databases Theory and Applications (ADC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14386)


Abstract

Automated crop data analysis plays an important role in modern Australian agriculture. As a key step in this analysis, vegetation segmentation, which aims to predict pixel-level labels for vegetation images, has recently demonstrated strong results on benchmark datasets. However, these promising results rest on the assumption that the test data and the training data follow an identical distribution. Because of differences in vegetation species, country, or illumination conditions, this assumption is commonly violated in real-world scenarios. As a pilot study, this work confirms that a model pre-trained on worldwide vegetation data degrades when applied to Australian wheat data. Instead of conducting expensive pixel-level annotation of the Australian wheat data, we propose a self-training strategy that incorporates confidence-estimated pseudo-labels of the wheat data into the training process to close the distribution gap. Meanwhile, to reduce the computational cost, we equip a lightweight transformer framework with a token clustering and reconstruction module. Extensive experimental results demonstrate that the proposed network achieves 6.4% higher mIoU at 8.6% lower computational cost than the baseline methods.
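The self-training strategy summarised in the abstract follows a common pattern: predict labels on the unlabelled target-domain (Australian wheat) images, keep only high-confidence pixels as pseudo-labels, and mix them with the labelled source data during training. The sketch below illustrates this confidence-thresholded pseudo-labelling loop in PyTorch; it is a minimal illustration under assumed names (generate_pseudo_labels, self_training_step) and an assumed 0.9 confidence threshold, not the authors' implementation or their token clustering module.

```python
# Illustrative sketch (not the paper's code): confidence-thresholded
# pseudo-labelling for self-training a segmentation model on unlabelled
# target-domain images. Model interface, function names, and the 0.9
# threshold are assumptions for illustration only.
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255      # pixels below the confidence threshold are excluded from the loss
CONF_THRESHOLD = 0.9    # assumed confidence cut-off for accepting a pseudo-label

@torch.no_grad()
def generate_pseudo_labels(model, images):
    """Predict per-pixel classes and mask out low-confidence pixels."""
    logits = model(images)                  # (B, C, H, W) class scores
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)         # per-pixel confidence and predicted class
    labels[conf < CONF_THRESHOLD] = IGNORE_INDEX
    return labels

def self_training_step(model, optimizer, src_batch, tgt_images):
    """One update mixing labelled source data with pseudo-labelled target data."""
    src_images, src_labels = src_batch
    pseudo_labels = generate_pseudo_labels(model, tgt_images)

    loss = F.cross_entropy(model(src_images), src_labels, ignore_index=IGNORE_INDEX)
    loss = loss + F.cross_entropy(model(tgt_images), pseudo_labels,
                                  ignore_index=IGNORE_INDEX)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```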



Acknowledgement

This work was partially supported by UoQ2003-011RTX (2020–2024).

Author information


Corresponding author

Correspondence to Bowen Yuan.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yuan, B., Wang, Z., Yu, X. (2024). Towards Reliable and Efficient Vegetation Segmentation for Australian Wheat Data Analysis. In: Bao, Z., Borovica-Gajic, R., Qiu, R., Choudhury, F., Yang, Z. (eds) Databases Theory and Applications. ADC 2023. Lecture Notes in Computer Science, vol 14386. Springer, Cham. https://doi.org/10.1007/978-3-031-47843-7_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-47843-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47842-0

  • Online ISBN: 978-3-031-47843-7

  • eBook Packages: Computer Science, Computer Science (R0)
