
A novel visible and infrared image fusion method based on convolutional neural network for pig-body feature detection

Published in: Multimedia Tools and Applications

Abstract

Visible (VI) and infrared (IR) image fusion has been an active research topic in recent years because of the higher segmentation accuracy it enables. However, traditional VI and IR image fusion algorithms fail to preserve enough texture and edge features in the fused image. To extract pig-body shape and temperature features more effectively, a new multisource fusion algorithm for shape segmentation and temperature extraction, named MCNNFuse, is presented based on a convolutional neural network (CNN). First, the visible and infrared images are fused by a modified CNN fusion model. Then, the shape feature is extracted from the fusion result by Otsu thresholding and morphological operations. Finally, the pig-body temperature feature is extracted based on the shape segmentation. Experimental results show that a segmentation model based on the presented fusion method achieves an average segmentation accuracy 1.883–7.170% higher than prevalent traditional and previously published methods. Furthermore, it lays the groundwork for accurate measurement of pig-body temperature.
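The post-fusion steps of the pipeline (Otsu thresholding followed by a morphological operation, then temperature readout inside the segmented body region) can be sketched in plain NumPy. This is not the paper's MCNNFuse model: the CNN fusion stage is omitted, and the `otsu_threshold` and `binary_open` helpers, the 3×3 structuring element, and the synthetic "fused" test image are all illustrative assumptions, not details from the article.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += int(hist[t])
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # mean of class below threshold
        m1 = (sum_all - sum0) / (total - w0)  # mean of class above threshold
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binary_open(mask, k=3):
    """Morphological opening (erosion then dilation) with a k x k square."""
    pad = k // 2
    h, w = mask.shape
    def erode(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for dy in range(k):
            for dx in range(k):
                out &= p[dy:dy + h, dx:dx + w]
        return out
    def dilate(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.zeros_like(m, dtype=bool)
        for dy in range(k):
            for dx in range(k):
                out |= p[dy:dy + h, dx:dx + w]
        return out
    return dilate(erode(mask))

# Toy stand-in for a fused image: a bright 20x20 "pig-body" blob on a dark
# background, plus one isolated noise pixel that the opening should remove.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, size=(64, 64)).astype(np.uint8)
img[20:40, 20:40] = 200
img[5, 5] = 255  # isolated noise pixel

t = otsu_threshold(img)
mask = binary_open(img > t)

# Mean intensity inside the segmented region, a proxy for body temperature
# when img encodes calibrated IR radiance.
body_temp = img[mask].mean()
```

The opening removes the single-pixel speck while leaving the 20×20 blob intact, so `body_temp` is averaged only over the body region, which is the role morphology plays after thresholding in the described pipeline.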


[Figures 1–17 appear in the published article.]



Acknowledgements

The author would like to thank her colleagues for their support of this work. The detailed comments from the anonymous reviewers are gratefully acknowledged. This work is jointly supported by the Fundamental Research Funds for Tianjin University of Technology and Education (No. KRKC012105).

Author information


Corresponding author

Correspondence to Zhen Zhong.



About this article


Cite this article

Zhong, Z. A novel visible and infrared image fusion method based on convolutional neural network for pig-body feature detection. Multimed Tools Appl 81, 2757–2775 (2022). https://doi.org/10.1007/s11042-021-11675-5

