Abstract
Stomata are pores in the epidermal tissue of plants formed by pairs of specialized cells known as guard cells (or occlusive cells). Analyzing the number and behavior of stomata, a task carried out by studying microscopic images, can serve, among other purposes, to better manage crops in agriculture. However, quantifying the number of stomata in an image is a costly process, since a single image might contain dozens of stomata. It is therefore worthwhile to automate the detection process. This problem can be framed as object detection, a task widely studied in computer vision, where the best current approaches are based on deep learning techniques. Although these techniques are very successful, they can be difficult to use. In this work, we address this problem, specifically the detection of stomata, by building a Jupyter notebook in Google Colaboratory that allows biologists to automatically detect stomata in their images.
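The counting step described above can be illustrated with a minimal sketch. The detector itself (the paper's notebook uses a deep-learning object detector; the function and threshold below are illustrative assumptions, not the authors' code) is assumed to return one candidate per stoma as a bounding box plus a confidence score, and the count is the number of candidates above a confidence threshold:

```python
# Illustrative sketch: counting stomata from object-detector output.
# Each detection is assumed to be a (x, y, w, h, confidence) tuple; the
# detector producing them (e.g. a YOLO-style network trained on annotated
# stomata images) is not shown here.

def count_stomata(detections, conf_threshold=0.5):
    """Return the number of detections whose confidence meets the threshold."""
    return sum(1 for (*box, conf) in detections if conf >= conf_threshold)

# Example: four candidate boxes, three above the default threshold.
candidates = [
    (10, 12, 30, 28, 0.91),
    (55, 40, 29, 31, 0.87),
    (102, 75, 33, 30, 0.62),
    (140, 20, 25, 27, 0.31),  # likely a false positive
]
print(count_stomata(candidates))  # 3
```

In practice, the threshold trades off missed stomata against false positives, and is typically tuned on a held-out set of annotated images.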
Partially supported by Ministerio de Industria, Economía y Competitividad, project MTM2017-88804-P; and Agencia de Desarrollo Económico de La Rioja, project 2017-I-IDD-00018. We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Casado-García, Á., Heras, J., Sanz-Sáez, A. (2020). Google Colaboratory for Quantifying Stomata in Images. In: Moreno-Díaz, R., Pichler, F., Quesada-Arencibia, A. (eds) Computer Aided Systems Theory – EUROCAST 2019. EUROCAST 2019. Lecture Notes in Computer Science(), vol 12014. Springer, Cham. https://doi.org/10.1007/978-3-030-45096-0_29
Print ISBN: 978-3-030-45095-3
Online ISBN: 978-3-030-45096-0