Automatic segmentation of cattle rib-eye area in ultrasound images using the UNet++ deep neural network

https://doi.org/10.1016/j.compag.2022.106818

Abstract

Ultrasound imaging is commonly used to estimate the size of various cuts of meat or quality traits in live animals. Unfortunately, ultrasound images are known for containing large amounts of visual noise, which can make it difficult to define the exact boundaries or shapes of the regions of interest. Therefore, new strategies from the digital image processing field are required to improve the process of obtaining information from these images. In this context, artificial intelligence, particularly through deep learning methods, has proved to be an optimized and efficient strategy, but one that has not yet been investigated for the cattle rib-eye area. This paper investigates the feasibility of applying the UNet++ deep neural network to the automatic segmentation of the cattle rib-eye area in ultrasound images. Additionally, several well-established deep learning semantic segmentation models are compared with UNet++: FCN, U-Net, SegNet, and DeepLab v3+. The models were tested on a dataset composed of grayscale cattle ultrasound images. All models showed excellent results in both location and boundaries. The best results were 97.37% in IoU, an MAE of 1.14 cm², and a coefficient of determination (R²) of 0.999. The labeled rib-eye area dataset used in this study is available for future research.

Introduction

Livestock production has been growing on a large scale worldwide, especially in emerging countries such as Brazil and China (Salter, 2017). Livestock is one of the most important productive sectors in Brazil, with more than 210 million head of cattle (de Souza et al., 2019). In 2020 alone, 14.74 million head of cattle were slaughtered under some type of health inspection service in Brazil, and the sector accounts for about 20% of the Brazilian Gross Domestic Product (GDP) (IBGE, 2021).

Carcasses show variability in several traits, such as Intramuscular Fat (IMF), marbling, and rib-eye area. This variability, in traits such as juiciness and flavor, is related to quality and boning yields, which influence marketing and economic results (Felício, 2010, Kvam and Kongsro, 2017). Ultrasound imaging is commonly used to estimate the size of various cuts of meat or quality traits in live animals (Booth et al., 2006). Unfortunately, ultrasound images are known for containing large amounts of noise, which can make it difficult to define the exact boundaries or shapes of the regions in these images (Arias et al., 2007). In this regard, new strategies from the digital image processing field are required to improve the process of extracting information from these images.

Booth et al. (2006) proposed a method for the segmentation of pork loins in ultrasound images based on the Active Contour (AC) framework. The method relies on an external measure based on global pixel intensity, and the algorithm achieved an Intersection over Union (IoU) score of 83.26%. Arias et al. (2007) proposed a method that combines shape priors and image information to automatically estimate the rib-eye area in ultrasound images, evolving a curve toward the shape and location of the correct rib-eye area. Their analysis indicates that 98.69% of the images had an IoU error below 20%, and 84% below 15% under the more rigorous measure. Kvam and Kongsro (2017) proposed a custom deep learning model to predict intramuscular fat in breeding pigs, obtaining a final correlation of R = 0.74 and a Root Mean Squared Error (RMSE) of 1.8. The proposed Convolutional Neural Network (CNN) was simple, but the results were encouraging.

Deep learning is applied in different contexts, such as medical image analysis (Shen et al., 2017), estimation of leaf area index and defoliation (Albughdadi et al., 2021, da Silva et al., 2019), segmentation and classification of trees (Liu et al., 2019), and detection of tree species (Freudenberg et al., 2019), among others. However, deep learning applied to the automatic segmentation of cattle ultrasound images is a relatively new area, and consequently the related literature is very limited. In addition, rib-eye segmentation presents an extra challenge: part of the boundaries of the region of interest is absent. Segmentation methods must therefore estimate limits that have no visible characteristics in the image, due to noise or limitations of the capture device.

Currently, the deep learning literature (Osco et al., 2021) presents several well-established, state-of-the-art semantic segmentation methods that support the automatic segmentation of images. Nonetheless, to the best of our knowledge, there is no prior work using deep networks to segment the cattle rib-eye area in ultrasound images. To fill this gap, we evaluated the performance of different state-of-the-art architectures, namely UNet++, FCN, U-Net, SegNet, and DeepLab v3+, in segmenting the cattle rib-eye area in ultrasound imagery. To assess the generalization capacity of the applied neural networks, we also included images with intense visual noise in which part of the boundaries is not visible, and the results remain promising. Moreover, we report results with input images at different resolutions, showing that even low-resolution images (128×128 pixels) provide satisfactory results, which is important for deploying such methods in embedded systems. As an additional contribution, we have made the labeled rib-eye area dataset freely available1 for use in future investigations.

This paper is organized as follows. Section 2 describes the compared CNN methods, their parameters, and the training scheme. The models are evaluated and discussed in Section 3. Finally, Section 4 presents the conclusion and future work.

Section snippets

Dataset

The dataset was obtained from measurements performed at Federal University of Mato Grosso do Sul (UFMS), Campo Grande, MS, Brazil. The entire experiment was approved by the animal use ethics committee. To compose the dataset, animals of the species Bos taurus taurus aged between 8 and 36 months were used. During April 2018, an image of each of the 67 animals was recorded with an Aloka SSD-500 ultrasound system. The images were taken in the Longissimus muscle in the region between the 12th and

Quantitative results

Table 1 shows the results of all methods in segmenting ultrasound images of the cattle rib-eye area. Considering the different image resolutions, SegNet benefits from higher resolution (e.g., accuracy increases from 86.06 to 90.98 and IoU from 82.13 to 85.60 when the resolution increases from 128×128 to 512×512). With a smaller impact, FCN also increases its accuracy and IoU when images with 256×256 pixels are used. On the other hand, UNet, UNet++
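For reference, the pixel accuracy and IoU values reported in Table 1 can be computed from a predicted and a reference binary mask. The paper does not provide its implementation; the sketch below is an illustrative version over toy masks:

```python
# Illustrative sketch: pixel accuracy and Intersection over Union (IoU)
# for binary segmentation masks (flattened to 1-D lists).
# This is NOT the authors' code; values below are toy data.

def pixel_accuracy(pred, ref):
    """Fraction of pixels where the predicted mask matches the reference."""
    correct = sum(1 for p, r in zip(pred, ref) if p == r)
    return correct / len(ref)

def iou(pred, ref):
    """Intersection over Union of two flat binary masks."""
    inter = sum(1 for p, r in zip(pred, ref) if p == 1 and r == 1)
    union = sum(1 for p, r in zip(pred, ref) if p == 1 or r == 1)
    return inter / union if union else 1.0

# Toy 4x4 masks flattened row by row.
ref  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]
pred = [0, 1, 1, 0,  0, 1, 1, 1,  0, 1, 0, 0,  0, 0, 0, 0]

print(round(pixel_accuracy(pred, ref), 4))  # 0.875
print(round(iou(pred, ref), 4))             # 0.7143
```

Note that IoU penalizes boundary disagreement more strongly than pixel accuracy, since background pixels outside the union do not inflate the score.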

Conclusion

In this paper, we presented an investigation of deep learning capabilities for the automatic segmentation of the cattle rib-eye area in ultrasound images. Several state-of-the-art deep networks were investigated, and the models showed good segmentation performance in both location and boundaries. The UNet++ model presented the best overall results, achieving 97.37% in IoU, an MAE of 1.14 cm², and R² = 0.999. These findings demonstrated the high capability of the deep learning methods to
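The MAE and R² figures above compare predicted and reference rib-eye areas in cm². A minimal sketch of these two metrics, assuming paired lists of areas (the values below are hypothetical, not from the paper's dataset):

```python
# Illustrative computation of MAE and coefficient of determination (R²)
# over predicted vs. reference rib-eye areas (cm²).
# The area values are made up for demonstration only.

def mae(pred, ref):
    """Mean absolute error between predicted and reference areas."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def r_squared(pred, ref):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_ref = sum(ref) / len(ref)
    ss_res = sum((r - p) ** 2 for p, r in zip(pred, ref))
    ss_tot = sum((r - mean_ref) ** 2 for r in ref)
    return 1 - ss_res / ss_tot

ref  = [62.0, 70.5, 55.3, 81.2, 66.8]   # hypothetical reference areas (cm²)
pred = [63.1, 69.8, 56.0, 80.0, 67.5]   # hypothetical predicted areas (cm²)

print(round(mae(pred, ref), 3))
print(round(r_squared(pred, ref), 4))
```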

Funding

This work was supported by CNPq (National Council for Scientific and Technological Development) [Grant Nos. 303559/2019-5, 433783/2018-4, and 310517/2020-6] and by CAPES (Coordination for the Improvement of Higher Education Personnel) [Grant No. 88881.311850/2018-01].

CRediT authorship contribution statement

Maximilian Jaderson de Melo: Methodology, Formal analysis, Writing – original draft. Diogo Nunes Gonçalves: Software, Writing – review & editing. Marina de Nadai Bonin Gomes: Conceptualization, Validation, Supervision. Jonathan de Andrade Silva: Software. Ana Paula Marques Ramos: Writing – original draft. Lucas Prado Osco: Formal analysis, Writing – review & editing. Michelle Taís Garcia Furuya: Writing – original draft. José Marcato Junior: Conceptualization, Writing – original draft,

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The authors acknowledge the support of the UFMS (Federal University of Mato Grosso do Sul), and the CAPES (Coordination for the Improvement of Higher Education Personnel) (Finance Code 001).

References (28)

  • Felício, P.E.d., 2010. Classificação, tipificação e qualidade da carne bovina. In: VI Congresso Brasileiro de Ciência e...
  • Freudenberg, M., et al., 2019. Large scale palm tree detection in high resolution satellite images using U-Net. Remote Sensing.
  • Glorot, X., Bengio, Y., 2010. Understanding the difficulty of training deep feedforward neural networks. In: Teh, Y.W.,...
  • He, K., Zhang, X., Ren, S., Sun, J., 2015. Deep residual learning for image recognition....