Automatic segmentation of cattle rib-eye area in ultrasound images using the UNet++ deep neural network
Introduction
Livestock production worldwide has been growing on a large scale, especially in emerging countries such as Brazil and China (Salter, 2017). Livestock is one of the most important productive sectors in Brazil, with more than 210 million head of cattle (de Souza et al., 2019). In 2020 alone, 14.740 million head of cattle were slaughtered under some type of health inspection service in Brazil, and this sector accounts for about 20% of the Brazilian Gross Domestic Product (GDP) (IBGE, 2021).
The carcasses show variability in several traits, such as Intramuscular Fat (IMF), marbling, and rib-eye area. Variability in traits such as juiciness and flavor is related to quality and boning yields, which influence marketing and economic results (Felício, 2010; Kvam and Kongsro, 2017). Ultrasound imaging is commonly used to estimate the size of various cuts of meat or quality traits in live animals (Booth et al., 2006). Unfortunately, ultrasound images are known for containing a large amount of noise, which can make it difficult to define the exact boundaries or shapes of the regions in these images (Arias et al., 2007). In this regard, new strategies from the digital image processing field are required to improve the extraction of information from these groups of images.
Booth et al. (2006) proposed a method for the segmentation of pork loins in ultrasound images based on the Active Contour (AC) framework. The method relies on an external measure based on global pixel intensity and achieved an Intersection over Union (IoU) score of 83.26%. Arias et al. (2007) proposed a method that combines shape priors and image information to achieve automatic estimation of the rib-eye area in ultrasound images. The method evolves a curve corresponding to the shape and location of the correct rib-eye area. The result analysis indicates that 98.69% of the images had an IoU error smaller than 20%, and 84% had an error smaller than 15% under the rigorous measure. Kvam and Kongsro (2017) proposed a custom Deep Learning model to predict intramuscular fat in breeding pigs, reporting the final correlation and Root Mean Squared Error (RMSE) of the predictions. The proposed Convolutional Neural Network (CNN) is simple, but the results were encouraging.
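As a reference for the IoU score used throughout these comparisons, a minimal sketch of its computation for binary segmentation masks could look like the following (pure Python, with a hypothetical helper name; not the evaluation code used in any of the cited works):

```python
def iou(pred, target):
    """Intersection over Union (Jaccard index) for two binary masks,
    given as flat sequences of 0/1 pixel labels of equal length."""
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    # Two empty masks are taken as a perfect match by convention.
    return inter / union if union else 1.0

# Example: one shared foreground pixel, three pixels in the union.
print(iou([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.333...
```

An "IoU error" such as the one reported by Arias et al. (2007) would then correspond to 1 minus this score.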
Deep Learning is applied in different contexts, such as medical image analysis (Shen et al., 2017), estimation of leaf area index and defoliation (Albughdadi et al., 2021; da Silva et al., 2019), segmentation and classification of trees (Liu et al., 2019), and detection of tree species (Freudenberg et al., 2019), among others. However, deep learning applied to the automatic segmentation of cattle ultrasound images is a relatively new area, and consequently the related literature is very limited. In addition, rib-eye segmentation presents an extra challenge: the absence of part of the boundaries of the region of interest. Segmentation methods therefore need to estimate these limits even when no visual characteristics appear in the image, due to noise or limitations of the capture device.
Currently, the Deep Learning literature (Osco et al., 2021) presents several well-established, state-of-the-art semantic segmentation methods that support the automatic segmentation of images. Nonetheless, up to the moment of writing, there is no evidence of the use of deep networks to segment cattle rib-eye areas in ultrasound images. To fill this gap, we evaluated the performance of different state-of-the-art architectures, namely UNet++, FCN, U-Net, SegNet, and DeepLab v3+, in segmenting the cattle rib-eye area in ultrasound imagery. To assess the generalization capacity of the applied neural networks, we also adopted images with intense visual noise in which part of the boundaries is not visible, and the results remain promising. Moreover, we report results with input images at different resolutions, showing that even low-resolution images (128×128 pixels) provide satisfactory results. These results are important for the inclusion of such methods in embedded systems. As an additional contribution, we have made the labeled rib-eye area dataset freely available so that future research can use it in other investigations.
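Feeding the networks at a fixed resolution such as 128×128 requires resampling the input images and their reference masks. A minimal nearest-neighbor resize is sketched below (an illustrative pure-Python helper, not the preprocessing code used in the paper; in practice a library routine such as OpenCV's resize would be used):

```python
def resize_nearest(mask, new_w, new_h):
    """Nearest-neighbor resize of a 2-D mask given as a list of rows.
    Each output pixel copies the nearest source pixel, which keeps
    binary label values intact (no interpolated in-between values)."""
    old_h, old_w = len(mask), len(mask[0])
    return [
        [mask[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Example: upsample a 2x2 binary mask to 4x4.
small = [[0, 1],
         [1, 0]]
for row in resize_nearest(small, 4, 4):
    print(row)
```

Nearest-neighbor is the usual choice for label masks precisely because bilinear or bicubic filters would produce fractional values that no longer correspond to class labels.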
This paper is organized as follows. Section 2 describes the compared CNN methods, their parameters, and the training schema. The models are evaluated and discussed in Section 3. Finally, Section 4 presents the conclusion and future work.
Section snippets
Dataset
The dataset was obtained from measurements performed at Federal University of Mato Grosso do Sul (UFMS), Campo Grande, MS, Brazil. The entire experiment was approved by the animal use ethics committee. To compose the dataset, animals of the species Bos taurus taurus aged between 8 and 36 months were used. During April 2018, an image of each of the 67 animals was recorded with an Aloka SSD-500 ultrasound system. The images were taken in the Longissimus muscle in the region between the 12th and
Quantitative results
Table 1 shows the results of all methods in segmenting ultrasound images of the cattle rib-eye area. Considering the use of different image resolutions, we can see that SegNet benefits from higher resolutions (e.g., accuracy increases from 86.06 to 90.98 and IoU from 82.13 to 85.60 as the resolution increases). With a smaller impact, FCN also increases its accuracy and IoU when higher-resolution images are used. On the other hand, UNet, UNet++
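The accuracy and MAE figures discussed here are pixel-wise metrics. As an illustration only (the paper does not provide its evaluation code), they can be sketched over binary masks as follows:

```python
def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the reference mask."""
    correct = sum(1 for p, t in zip(pred, target) if p == t)
    return correct / len(target)

def mae(pred, target):
    """Mean absolute error between two masks; for binary 0/1 masks this
    equals the fraction of mislabeled pixels, i.e. 1 - pixel accuracy."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

# Example: two of four pixels agree with the reference mask.
pred, ref = [1, 0, 1, 1], [1, 1, 1, 0]
print(pixel_accuracy(pred, ref))  # 0.5
print(mae(pred, ref))             # 0.5
```

Note that, unlike IoU, pixel accuracy rewards correctly predicted background, which is why both metrics are reported side by side in comparisons of this kind.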
Conclusion
In this paper, we presented an investigation of deep learning capabilities in the automatic segmentation of the cattle rib-eye area in ultrasound images. Several state-of-the-art deep networks were investigated, and the models showed good segmentation performance, both in location and boundaries. The UNet++ model presented the best overall results, achieving the best IoU and MAE scores. These findings demonstrated the high capability of the deep learning methods to
Funding
This work was supported by CNPq (National Council for Scientific and Technological Development) [Grant Nos. 303559/2019-5, 433783/2018-4, and 310517/2020-6] and CAPES (Coordination for the Improvement of Higher Education Personnel) [Grant No. 88881.311850/2018-01].
CRediT authorship contribution statement
Maximilian Jaderson de Melo: Methodology, Formal analysis, Writing – original draft. Diogo Nunes Gonçalves: Software, Writing – review & editing. Marina de Nadai Bonin Gomes: Conceptualization, Validation, Supervision. Jonathan de Andrade Silva: Software. Ana Paula Marques Ramos: Writing – original draft. Lucas Prado Osco: Formal analysis, Writing – review & editing. Michelle Taís Garcia Furuya: Writing – original draft. José Marcato Junior: Conceptualization, Writing – original draft,
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
The authors acknowledge the support of the UFMS (Federal University of Mato Grosso do Sul), and the CAPES (Coordination for the Improvement of Higher Education Personnel) (Finance Code 001).
References (28)
- da Silva, et al., 2019. Estimating soybean leaf defoliation using convolutional neural networks and synthetic images. Comput. Electron. Agric. 156.
- Kvam and Kongsro, 2017. In vivo prediction of intramuscular fat using ultrasound and deep learning. Comput. Electron. Agric.
- et al., 2019. Computer image analysis for intramuscular fat segmentation in dry-cured ham slices using convolutional neural networks. Food Control.
- Osco, et al., 2021. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. Geoinf.
- de Souza, et al., 2019. Sugarcane ethanol and beef cattle integration in Brazil. Biomass Bioenergy.
- Albughdadi, et al., 2021. Towards a massive Sentinel-2 LAI time-series production using 2-D convolutional networks. Comput. Electron. Agric.
- Arias, et al., 2007. Ultrasound image segmentation with shape priors: application to automatic cattle rib-eye area estimation. IEEE Trans. Image Process.
- Badrinarayanan, V., Handa, A., Cipolla, R., 2015. SegNet: A deep convolutional encoder-decoder architecture for robust...
- Booth, B., Neighbour, R., Li, X., 2006. On agricultural ultrasound image segmentation. In: Proceedings of IEEE...
- Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H., 2018. Encoder-decoder with atrous separable convolution for...
- Freudenberg, et al., 2019. Large scale palm tree detection in high resolution satellite images using U-Net. Remote Sensing.