
Learning Image Segmentation from Few Annotations

A REPTILE Application

  • Conference paper

Advances in Computational Intelligence (IWANN 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12861)

Abstract

How to build machine learning models from few annotations is an open research question. This article presents an application of a meta-learning algorithm (REPTILE) to the problem of object segmentation. We evaluate how using REPTILE during a pre-training phase accelerates the learning process without losing segmentation performance under poor labeling conditions, and we compare these results against training the detectors with basic transfer learning. Two scenarios are tested: (i) how segmentation performance evolves over training epochs with a fixed number of labels, and (ii) how segmentation performance improves with an increasing number of labels after a fixed number of epochs. The results suggest that REPTILE makes learning faster in both cases.
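The REPTILE update referred to in the abstract (Nichol et al. [19]) moves the shared initialization towards the weights obtained after a few inner-loop training steps on a sampled task. The sketch below is a minimal, hypothetical illustration of that update rule only: a linear least-squares model stands in for the paper's segmentation network, and all hyperparameters are chosen for the toy setting, not taken from the paper.

```python
import numpy as np

def sgd_steps(weights, task_data, lr=0.05, steps=10):
    """Inner loop: a few SGD steps on one task.

    Here the task is linear regression (X, y), used as a stand-in
    for fine-tuning a segmentation network on one dataset.
    """
    w = weights.copy()
    X, y = task_data
    for _ in range(steps):
        # Gradient of the mean squared error ||X w - y||^2 / n
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def reptile(init_weights, tasks, meta_lr=0.5, meta_iters=100, seed=0):
    """Outer loop: REPTILE meta-update theta <- theta + eps * (phi - theta),
    where phi is the result of inner-loop training on a sampled task."""
    theta = init_weights.astype(float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(meta_iters):
        task = tasks[rng.integers(len(tasks))]  # sample one task
        phi = sgd_steps(theta, task)            # adapt to that task
        theta += meta_lr * (phi - theta)        # REPTILE update
    return theta
```

After meta-training, `theta` serves as an initialization from which each task can be learned with only a few gradient steps, which is the property the paper exploits to reduce the number of annotations needed.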


Notes

  1. In aerial images, this translates into different image ground resolutions and object scales.

  2. ImageNet [8]: 14,197,122 images; Microsoft Common Objects in Context [11]: 2,500,000 images; CIFAR-10 [18]: 60,000 images; CINIC-10 [2]: 270,000 images.

  3. This is no longer the case, given the increasing availability of electronic devices capable of capturing images (e.g., smartphones, satellites).

  4. There are sub-datasets with diverse images of the same type at different resolutions.

  5. Annotation can be very time-consuming, which is why we tested only one dataset.

References

  1. Caruana, R.: Multitask learning. In: Thrun, S., Pratt, L. (eds.) Learning to Learn, pp. 95–133. Springer, Boston (1998). https://doi.org/10.1007/978-1-4615-5529-2_5

  2. Darlow, L.N., Crowley, E.J., Antoniou, A., Storkey, A.J.: CINIC-10 is not ImageNet or CIFAR-10 (2018)

  3. Paszke, A., et al.: ENet: a deep neural network architecture for real-time semantic segmentation. CoRR, abs/1606.02147 (2016)

  4. Pang, J., et al.: Libra R-CNN: towards balanced learning for object detection. CoRR, abs/1904.02701 (2019)

  5. Redmon, J., et al.: You only look once: unified, real-time object detection. CoRR, abs/1506.02640 (2015)

  6. He, K., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. CoRR, abs/1406.4729 (2014)

  7. He, K., et al.: Mask R-CNN. CoRR, abs/1703.06870 (2017)

  8. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y

  9. Girshick, R.B., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524 (2013)

  10. Ren, S., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. CoRR, abs/1506.01497 (2015)

  11. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48

  12. Liu, W., et al.: SSD: single shot multibox detector. CoRR, abs/1512.02325 (2015)

  13. Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J.: UNet++: a nested U-Net architecture for medical image segmentation. In: Stoyanov, D., et al. (eds.) DLMIA/ML-CDS 2018. LNCS, vol. 11045, pp. 3–11. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00889-5_1

  14. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks (2017)

  15. Girshick, R.: Fast R-CNN (2015)

  16. Kamrul Hasan, S.M., Linte, C.A.: U-NetPlus: a modified encoder-decoder U-Net architecture for semantic and instance segmentation of surgical instrument. CoRR, abs/1902.08994 (2019)

  17. Hodgson, J., et al.: Drones count wildlife more accurately and precisely than humans. Methods Ecol. Evol. 9, 1160–1167 (2018)

  18. Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research)

  19. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. CoRR, abs/1803.02999 (2018)

  20. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger (2016)

  21. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement (2018)

  22. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597 (2015)

  23. Vanschoren, J.: Meta-learning: a survey (2018)

  24. Wang, Y., Yao, Q., Kwok, J., Ni, L.M.: Generalizing from a few examples: a survey on few-shot learning (2019)

  25. Zeng, Z., Xie, W., Zhang, Y., Lu, Y.: RIC-Unet: an improved neural network based on Unet for nuclei segmentation in histology images. IEEE Access 7, 21420–21428 (2019)


Acknowledgments

This work was supported by the Swiss Space Center (SERI/SSO MdP program). All the experiments shown in this paper were performed thanks to a tight collaboration with the company Picterra (https://picterra.ch/). Picterra provided all the datasets used and participated in constructive discussions about how to deal with large images and how to build object detectors from few examples.

Author information


Corresponding author

Correspondence to Andres Perez-Uribe.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Satizábal, H.F., Perez-Uribe, A. (2021). Learning Image Segmentation from Few Annotations. In: Rojas, I., Joya, G., Català, A. (eds) Advances in Computational Intelligence. IWANN 2021. Lecture Notes in Computer Science, vol 12861. Springer, Cham. https://doi.org/10.1007/978-3-030-85030-2_42


  • DOI: https://doi.org/10.1007/978-3-030-85030-2_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85029-6

  • Online ISBN: 978-3-030-85030-2

  • eBook Packages: Computer Science, Computer Science (R0)
