Self-supervised contrastive learning on agricultural images

https://doi.org/10.1016/j.compag.2021.106510
Open access under a Creative Commons license.

Highlights

  • Generation of agricultural pre-trained weights with the self-supervised contrastive learning method SwAV for three different agricultural datasets.

  • The learned pre-trained weights are transferred to both classification and segmentation downstream tasks.

  • We show that domain-specific pre-trained weights significantly increase the label efficiency of classification tasks.

Abstract

Agriculture is emerging as a prominent application domain for advanced computer vision algorithms. While deep learning approaches can help solve problems such as plant detection, they rely on the availability of large amounts of annotated images for training. However, relevant agricultural datasets are scarce, and generic, well-established image datasets such as ImageNet do not necessarily capture the characteristics of agricultural environments. This observation motivated us to explore the applicability of self-supervised contrastive learning to agricultural images. Our approach uses large numbers of non-annotated agricultural images, which are easy to obtain, to pre-train deep neural networks. Only a limited number of annotated images is then required to fine-tune those networks in a supervised manner for relevant downstream tasks, such as plant classification or segmentation. To the best of our knowledge, contrastive self-supervised learning has not been explored before in the area of agricultural images. Our results reveal that it outperforms conventional deep learning approaches in classification downstream tasks, especially when only small amounts of annotated training images are available, where we observed an increase of up to 14% in average top-1 classification accuracy. Furthermore, the computational cost of generating data-specific pre-trained weights is fairly low, allowing one to easily generate new pre-trained weights for any custom model architecture or task.
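To make the two-stage pipeline concrete, the sketch below (not the authors' code) illustrates SwAV-style pre-training followed by supervised fine-tuning in PyTorch. The backbone choice, prototype count, class count, and all hyperparameters are illustrative assumptions; the actual SwAV method additionally learns the prototypes and uses multi-crop augmentation.

```python
# Minimal sketch, assuming a ResNet-50 backbone and randomly initialized
# prototypes. Stage 1: self-supervised SwAV-style pre-training on unlabeled
# images. Stage 2: supervised fine-tuning on a small annotated dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


def sinkhorn(scores: torch.Tensor, eps: float = 0.05, iters: int = 3) -> torch.Tensor:
    """Sinkhorn-Knopp normalization used by SwAV to turn prototype
    scores (batch x prototypes) into soft cluster assignments."""
    q = torch.exp(scores / eps).T          # (K, B)
    q /= q.sum()
    k, b = q.shape
    for _ in range(iters):
        q /= q.sum(dim=1, keepdim=True)    # normalize over prototypes
        q /= k
        q /= q.sum(dim=0, keepdim=True)    # normalize over batch samples
        q /= b
    return (q * b).T                       # (B, K), rows sum to 1


def swav_loss(z1, z2, prototypes, temp=0.1):
    """Swapped-prediction loss: the code computed from one augmented
    view must be predicted from the other view, and vice versa."""
    p1 = z1 @ prototypes.T / temp          # prototype scores, view 1
    p2 = z2 @ prototypes.T / temp          # prototype scores, view 2
    with torch.no_grad():                  # codes are targets, no gradient
        q1, q2 = sinkhorn(p1), sinkhorn(p2)
    return -0.5 * ((q2 * F.log_softmax(p1, dim=1)).sum(1).mean()
                   + (q1 * F.log_softmax(p2, dim=1)).sum(1).mean())


# Stage 1: pre-train on two augmented views per unlabeled agricultural image.
backbone = models.resnet50(weights=None)
backbone.fc = nn.Identity()                                # drop the ImageNet head
prototypes = F.normalize(torch.randn(3000, 2048), dim=1)   # learnable in SwAV proper

def pretrain_step(view1, view2):
    z1 = F.normalize(backbone(view1), dim=1)
    z2 = F.normalize(backbone(view2), dim=1)
    return swav_loss(z1, z2, prototypes)

# Stage 2: transfer. Attach a task head and fine-tune on few labeled images.
num_classes = 10                                           # illustrative
backbone.fc = nn.Linear(2048, num_classes)
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

The fine-tuning stage is then a standard supervised training loop over the small annotated dataset; this is where the label-efficiency gains reported above are measured.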

Keywords

Contrastive learning
Deep learning
Self-supervision
SwAV
Transfer-learning

MSC

68T45
68U10
