A CNN-based methodology for cow heat analysis from endoscopic images

Abstract

In cattle farming, artificial insemination is a biotechnology that offers farmers a wide range of benefits, namely health security, genetic gain and reduced costs. The main condition for successful artificial insemination in cattle is the detection of heat (or estrus). In this context, several cow heat detection systems have recently been proposed in the literature to assist the farmer in this task. Nevertheless, they are mainly based on the analysis of the physical behavior of the cow, which may be affected by several factors related to its health and its environment. In this paper, we present a new vision system for cow heat detection based on the analysis of the genital tract of the cow. The core of our system is a CNN model designed and tailored for analyzing endoscopic images collected using an innovative insemination technology named Eye breed. Experiments conducted on two datasets, namely our own dataset and a public dataset, show the high accuracy of our CNN model (more than 97% on both datasets), outperforming 19 state-of-the-art methods. Moreover, we propose an optimized version of our model for Android deployment by exploiting several techniques, namely quantization, GPU acceleration and video downsampling. Tests conducted on a smartphone show that our heat detection system has a response time of a few seconds.
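For context, the on-device optimization mentioned in the abstract can be illustrated with TensorFlow, the framework the paper relies on (footnote 4). The following is a minimal sketch, not the authors' code: it assumes a hypothetical trained Keras classifier (heat_cnn.h5, binary heat / no-heat output on endoscopic frames) and shows post-training quantization with the TensorFlow Lite converter, one standard way to shrink a CNN for Android deployment.

import tensorflow as tf

# Hypothetical trained heat / no-heat classifier (name is an assumption).
model = tf.keras.models.load_model("heat_cnn.h5")

# Convert the Keras model to TensorFlow Lite with post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_model = converter.convert()

# Save the compact model for bundling into an Android app.
with open("heat_cnn.tflite", "wb") as f:
    f.write(tflite_model)

On the Android side, the resulting .tflite model would typically be run with the TensorFlow Lite interpreter, optionally through its GPU delegate, and feeding it downsampled video frames further reduces the end-to-end response time.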

Notes

  1. An assisted animal reproduction method in which a trained breeder artificially introduces the semen of the bull into the reproductive tract of the cow [3].

  2. https://www.genesdiffusion.com/

  3. http://www.cecna.fr/

  4. https://www.tensorflow.org/

References

  1. Ahmad J, Muhammad K, Lee MY, Baik SW (2017) Endoscopic image classification and retrieval using clustered convolutional features. J Med Syst 41(12):196

  2. Arcidiacono C, Mancino M, Porto S (2020) Moving mean-based algorithm for dairy cow’s oestrus detection from uniaxial-accelerometer data acquired in a free-stall barn. Comput Electron Agric 175:105498

  3. Berry D, Ring S, Twomey A, Evans R (2020) Choice of artificial insemination beef bulls used to mate with female dairy cattle. J Dairy Sci 103(2):1701–1710

  4. Chae JW, Cho HC (2021) Identifying the mating posture of cattle using deep learning-based object detection with networks of various settings. J Electr Eng Technol:1–8

  5. Chen KX, Ren JY, Wu XJ, Kittler J (2020) Covariance descriptors on a gaussian manifold and their application to image set classification. Pattern Recogn 107:107463

  6. Choi D, Shallue CJ, Nado Z, Lee J, Maddison CJ, Dahl GE (2019) On empirical comparisons of optimizers for deep learning. arXiv:1910.05446

  7. Chollet F (2017) Xception: Deep learning with depthwise separable convolutions. In: IEEE Conference on computer vision and pattern recognition, pp 1251–1258

  8. Dalton J, Robinson J, Price W, DeJarnette J, Chapwanya A (2021) Artificial insemination of cattle: Description and assessment of a training program for veterinary students. J Dairy Sci 104(5):6295–6303

  9. Decherf A, Drevillon P (2020) Device for the atraumatic transfer of a material or substance with a reproductive, therapeutic or diagnostic purpose into female mammals. US Patent 10,675,133

  10. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image database. In: IEEE Conference on computer vision and pattern recognition, pp 248–255

  11. Gaude I, Kempf A, Strüve KD, Hoedemaker M (2021) Estrus signs in holstein friesian dairy cows and their reliability for ovulation detection in the context of visual estrus detection. Livest Sci 245:104449

  12. Guo Y, Zhang Z, He D, Niu J, Tan Y (2019) Detection of cow mounting behavior using region geometry and optical flow characteristics. Comput Electron Agric 163:104828

  13. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: IEEE Conference on computer vision and pattern recognition, pp 770–778

  14. Higaki S, Horihata K, Suzuki C, Sakurai R, Suda T, Yoshioka K (2021) Estrus detection using background image subtraction technique in tie-stalled cows. Animals 11(6):1795

  15. Howard A, Sandler M, Chu G, Chen LC, Chen B, Tan M, Wang W, Zhu Y, Pang R, Vasudevan V et al (2019) Searching for mobilenetv3. In: IEEE Conference on computer vision, pp 1314–1324

  16. Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning, pp 448–456

  17. Kamilaris A, Prenafeta-Boldú FX (2018) Deep learning in agriculture: a survey. Comput Electron Agric 147:70–90

  18. Kumar N, Sukavanam N (2020) A cascaded cnn model for multiple human tracking and re-localization in complex video sequences with large displacement. Multimed Tools Appl 79(9):6109–6134

  19. Kumar N, Sukavanam N (2020) A weakly supervised cnn model for spatial localization of human activities in unconstraint environment. SIViP 14(5):1009–1016

  20. Majid A, Khan MA, Yasmin M, Rehman A, Yousafzai A, Tariq U (2020) Classification of stomach infections: a paradigm of convolutional neural network along with classical features fusion and selection. Microsc Res Tech 83(5):562–576

  21. Nagel M, Baalen Mv, Blankevoort T, Welling M (2019) Data-free quantization through weight equalization and bias correction. In: IEEE Conference on computer vision, pp 1325–1334

  22. Pérez-Hernández F, Tabik S, Lamas A, Olmos R, Fujita H, Herrera F (2020) Object detection binary classifiers methodology based on deep learning to identify small objects handled similarly: Application in video surveillance. Knowl-Based Syst 194:105590

  23. Pogorelov K, Randel KR, Griwodz C, Eskeland SL, de Lange T, Johansen D, Spampinato C, Dang-Nguyen DT, Lux M, Schmidt PT et al (2017) Kvasir: a multi-class image dataset for computer aided gastrointestinal disease detection. In: ACM Conference on multimedia systems, pp 164–169

  24. Pogorelov K, Riegler M, Eskeland SL, de Lange T, Johansen D, Griwodz C, Schmidt PT, Halvorsen P (2017) Efficient disease detection in gastrointestinal videos–global features versus neural networks. Multimed Tools Appl 76(21):22493–22525

  25. Rahman A, Smith D, Little B, Ingham A, Greenwood P, Bishop-Hurley G (2018) Cattle behaviour classification from collar, halter, and ear tag sensors. Inf Process Agricul 5(1):124–133

  26. Roberts JM (2018) Oestrus detector. US Patent 9,913,703

  27. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-cam: Visual explanations from deep networks via gradient-based localization. In: IEEE Conference on computer vision, pp 618–626

  28. Shahriar MS, Smith D, Rahman A, Freeman M, Hills J, Rawnsley R, Henry D, Bishop-Hurley G (2016) Detecting heat events in dairy cows using accelerometers and unsupervised learning. Comput Electron Agric 128:20–26

  29. Sharpe JC, Rowe P, Vishwanath R, Martinsen PJ (2020) Sensor apparatus and associated systems and methods. US Patent 10,555,504

  30. Shorten C, Khoshgoftaar TM (2019) A survey on image data augmentation for deep learning. J Big Data 6(1):1–48

  31. Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations

  32. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15(1):1929–1958

  33. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In: International conference on artificial intelligence

  34. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: IEEE Conference on computer vision and pattern recognition, pp 1–9

  35. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: IEEE Conference on computer vision and pattern recognition, pp 2818–2826

  36. Tiwari AK, Kanhangad V, Pachori RB (2017) Histogram refinement for texture descriptor based image retrieval. Signal Process Image Commun 53:73–85

  37. Unar S, Wang X, Zhang C (2018) Visual and textual information fusion using kernel method for content based image retrieval. Inf Fusion 44:176–187

  38. Wang C, Wang X, Xia Z, Zhang C (2019) Ternary radial harmonic fourier moments based robust stereo image zero-watermarking algorithm. Inf Sci 470:109–120

  39. Wang W, Zhang C, Tian J, Wang X, Ou J, Zhang J, Li J (2020) High-resolution radar target recognition via inception-based VGG (IVGG) networks. Comput Intell Neurosci 2020

  40. Yang H, Qi S, Tian J, Niu P, Wang X (2021) Robust and discriminative image representation: fractional-order jacobi-fourier moments. Pattern Recogn 115:107898

  41. Yang H, Qi S, Niu P, Wang X (2020) Color image zero-watermarking based on fast quaternion generic polar complex exponential transform. Signal Process Image Commun 82:115747

  42. Zebari HM, Rutter SM, Bleach EC (2018) Characterizing changes in activity and feeding behaviour of lactating dairy cows during behavioural and silent oestrus. Appl Anim Behav Sci 206:12–17

Acknowledgments

The authors gratefully acknowledge Claude Grenier (CEO, Gènes Diffusion), Pierrick Drevillon (CEO, CECNA) and Olivier Darasse (CEO, Elexinn) for making the data available and for the labeling of the videos.

Funding

This project has been funded by the FEDER European program, the JUNIA French engineering school and the Gènes Diffusion French company.

Author information

Corresponding author

Correspondence to Ruiwen He.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

(MP4 4.17 MB)

About this article

Cite this article

He, R., Benhabiles, H., Windal, F. et al. A CNN-based methodology for cow heat analysis from endoscopic images. Appl Intell 52, 8372–8385 (2022). https://doi.org/10.1007/s10489-021-02910-5
