Automatic Picture-Matching of Crested Newts

Conference paper

In: Cooperative Design, Visualization, and Engineering (CDVE 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12983)

Abstract

This article presents a set of image processing, deep learning, and pattern matching techniques that can be combined to automatically identify individual specimens of a single newt species (Triturus cristatus Laurenti, 1768). First, each newt image is augmented, segmented, and straightened. Then, the patterns in the images are detected and compared with one another, allowing individual newts living in the selected areas to be told apart.
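
As a rough illustration of the pipeline outlined above, the sketch below shows how the segmentation and pattern-comparison stages could look in Python with scikit-image (see note 2). The Otsu thresholding and ORB keypoint matching used here are placeholder assumptions for illustration, not the techniques actually used in the paper, and the function names are hypothetical.

```python
# Minimal, illustrative sketch of the pipeline described in the abstract:
# segment the newt from the background, then compare the patterns of two
# photographs.  scikit-image is assumed (see note 2); the Otsu threshold and
# ORB keypoint matching are stand-ins for the paper's own segmentation and
# pattern-matching steps.
import numpy as np
from skimage import color, filters, morphology
from skimage.feature import ORB, match_descriptors


def segment_newt(rgb_image: np.ndarray) -> np.ndarray:
    """Return a boolean mask separating the newt from the background."""
    gray = color.rgb2gray(rgb_image)
    mask = gray > filters.threshold_otsu(gray)      # assumes a bright subject
    mask = morphology.remove_small_objects(mask, min_size=256)
    return morphology.binary_closing(mask, morphology.disk(3))


def pattern_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Rough dissimilarity between two (already straightened) pattern images."""
    orb = ORB(n_keypoints=200)
    descriptors = []
    for img in (img_a, img_b):
        orb.detect_and_extract(color.rgb2gray(img))
        descriptors.append(orb.descriptors)
    matches = match_descriptors(descriptors[0], descriptors[1], cross_check=True)
    # Fewer mutual matches -> larger distance (same individual -> many matches).
    return 1.0 - len(matches) / float(orb.n_keypoints)
```

A complete implementation would also cover the augmentation and straightening steps mentioned in the abstract, which are omitted from this sketch.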

Notes

  1. http://www.amphident.de/en/index.html
  2. https://scikit-image.org/
  3. http://www.robots.ox.ac.uk/~vgg/software/via/via.html

Acknowledgement

We would like to thank Christian Hundt from the NVIDIA AI Technology Center for his very valuable advice throughout the creation of this work. Thanks to Remy Haas and Lionel L’Hoste for retrieving the pictures in the field and annotating them in Newtrap Manager. This work has been financed by the Luxembourg FNR through the POC17 NEWTRAP.

Author information

Correspondence to Yoanne Didry or Xavier Mestdagh.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Magnette, G., Didry, Y., Mestdagh, X. (2021). Automatic Picture-Matching of Crested Newts. In: Luo, Y. (eds) Cooperative Design, Visualization, and Engineering. CDVE 2021. Lecture Notes in Computer Science, vol 12983. Springer, Cham. https://doi.org/10.1007/978-3-030-88207-5_32

  • DOI: https://doi.org/10.1007/978-3-030-88207-5_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88206-8

  • Online ISBN: 978-3-030-88207-5

  • eBook Packages: Computer Science, Computer Science (R0)
