Abstract
Fusion of visible (VI) and infrared (IR) images has been an active research topic in recent years because of the higher segmentation accuracy it enables. However, traditional VI and IR image fusion algorithms cannot extract sufficient texture and edge features from the fused image. To extract pig-body shape and temperature features more effectively, a new multisource fusion algorithm for shape segmentation and temperature extraction, named MCNNFuse, is presented based on a convolutional neural network (CNN). First, the visible and infrared images are fused by a modified CNN fusion model. Then, the shape feature is extracted from the fusion result by Otsu thresholding and morphological operations. Finally, the pig-body temperature feature is extracted based on the shape segmentation. Experimental results show that the segmentation model based on the presented fusion method achieves a 1.883–7.170% higher average segmentation accuracy rate than prevalent traditional and previously published methods. Furthermore, it establishes the groundwork for accurate measurement of pig-body temperature.
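The shape-extraction step described above (Otsu thresholding of the fused image followed by morphological operations) can be sketched as follows. This is an illustrative NumPy-only reconstruction on synthetic data, not the authors' implementation; the image, blob geometry, and 3×3 structuring element are assumptions for demonstration.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu threshold of an 8-bit grayscale image (maximises between-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]          # weight of class 0 (values <= t)
        if w0 == 0:
            continue
        w1 = total - w0        # weight of class 1 (values > t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def _shift_stack(mask, k):
    """All k x k translations of a boolean mask (zero-padded at the borders)."""
    pad = k // 2
    p = np.pad(mask, pad)
    h, w = mask.shape
    return [p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
            for dy in range(-pad, pad + 1) for dx in range(-pad, pad + 1)]

def binary_open(mask, k=3):
    """Morphological opening: erosion then dilation with a k x k square element."""
    eroded = np.logical_and.reduce(_shift_stack(mask, k))
    return np.logical_or.reduce(_shift_stack(eroded, k))

# Toy "fused" image: a bright pig-body-like blob on a dark background plus one speckle.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:50, 15:55] = 200   # 30 x 40 body region
img[5, 5] = 220           # isolated noise pixel
t = otsu_threshold(img)
body = binary_open(img > t)   # opening removes the speckle, keeps the body
```

In a real pipeline, OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` and `cv2.morphologyEx` would typically replace the hand-rolled helpers; they are written out here only to make the logic of each step explicit.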
Acknowledgements
The author would like to thank her colleagues for their support of this work. The detailed comments from the anonymous reviewers are gratefully acknowledged. This work is supported by the Fundamental Research Funds for Tianjin University of Technology and Education (No. KRKC012105).
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Zhong, Z. A novel visible and infrared image fusion method based on convolutional neural network for pig-body feature detection. Multimed Tools Appl 81, 2757–2775 (2022). https://doi.org/10.1007/s11042-021-11675-5