Pigeon cleaning behavior detection algorithm based on light-weight network

https://doi.org/10.1016/j.compag.2022.107032

Highlights

  • Proposing a pigeon behavior detection method based on the YOLO v4 deep learning algorithm.

  • Using a self-made data set, multiple target detection models and multiple lightweight feature extraction networks are compared.

  • Comparative study of parameter count, weight size, computation, accuracy, and FPS.

  • The proposed method contributes to the development of dovecote inspection robots.

Abstract

The behavior of pigeons in the dovecote reflects their environmental comfort and health. To overcome the time consumption, labor cost, and subjectivity of traditional manual inspection, an improved light-weight YOLO v4 target detection algorithm was proposed for detecting the cleaning behavior of breeding pigeons. With GhostNet as the backbone, SPP, FPN, and PANet networks were employed to strengthen the extracted features. While maintaining accuracy, GhostNet-YOLO v4 reduced the model's parameter count and shrank its weight size to 43 MB. Under the modified model, the light-weight feature extraction network GhostNet outperformed MobileNet v1–v3. Compared with Faster RCNN, SSD, YOLO v4, and YOLO v3, the compression rates were 43.4 percent, 35.8 percent, 70.1 percent, and 69.1 percent, respectively. The improved algorithm achieves an accuracy of 97.06 percent and a recognition speed of 0.028 s per frame. The improved model can provide a theoretical foundation and technological reference for real-time detection of breeding pigeon behavior in a dovecote.

Introduction

Pigeon breeding is a growing sector of poultry breeding that is distinct from the “three birds” of chickens, ducks, and geese. Due to their short feeding cycle, large body size, and high protein content, suckling pigeons are also known as meat pigeons. With strong market demand and year-over-year expansion, large-scale meat pigeon production supplies high-quality meat and eggs to urban and rural inhabitants. The breeding pigeons' health has a direct effect on the quality of the young pigeons; thus, monitoring and early-warning systems for the health and breeding of breeding pigeons are critical. Pigeon behavior is one of the key factors reflecting environment and health status, and the cleaning and other behaviors of meat pigeons are closely related to the comfort of the loft environment and the pigeons' health. Pigeons take pleasure in grooming their feathers; this grooming process is referred to in this text as cleaning behavior, a form of individual grooming behavior. The paper detects the presence and absence of feather grooming, which is further assessed as cleaning behavior. Furthermore, the meat pigeon industry plays an important role in adjusting the industrial structure, increasing farmers' income, and providing high-quality protein. Moreover, China is a major country in the breeding and consumption of meat pigeons. Therefore, research on the behavior of meat pigeons can reveal their health status and further improve the efficiency of meat pigeon breeding.
Detection is the premise of the data acquisition, analysis, and decision making in this paper. Research on individual and cleaning behavior detection is still at an early stage, but it has made progress in several directions: the frequency of a pigeon's cleaning behavior can reflect environmental comfort, and billing and interactive behaviors provide a reference for pigeon breeding behavior. Abnormal detections can trigger early warnings and reminders, and data analysis of pigeon feeding activity enables precision feeding that avoids waste. Efficient behavior identification using target detection in breeding requires deploying algorithms on embedded electronic terminals. A lightweight network can effectively address the insufficient computing power of such equipment, reduce cost, and improve the feasibility of deploying AI-based identification. However, at the moment, meat pigeons' health is mostly monitored manually, which requires significant human resources and is prone to inexperience and missed inspections. As a result, modernizing the meat pigeon business and using information technology to monitor pigeon health are critical.

Recently, numerous obstacles have arisen in the production of pigeon meat, including obsolete production methods, untapped potential, significant breed degeneration, unguaranteed product quality and safety, exacerbated epidemic worries, and irregular breeding and management. These concerns significantly influence the pigeon meat industry's development and evolution. Additionally, machine learning technology has been applied to disease prevention in meat pigeons, precision control, and other areas to assist the pigeon industry's expansion, adjust the structure of pigeon production, and expedite the industrialized breeding mode (Mathis and Mahis, 2020).

At the moment, poultry detection is mostly accomplished via wearable devices that collect behavioral data or via monitoring equipment. However, wearable technology presents challenges in customization, durability, and maintenance cost. The pigeon industry has not yet reported deploying a deep learning target detection algorithm to monitor breeding pigeons. The primary objective of target detection is to precisely determine the category and position of the target object within an image or video. Since 2012, the field has been divided into traditional and deep learning approaches. Traditional methods mainly use the Scale-Invariant Feature Transform (SIFT) (Kasiselvanathan et al., 2020) and the Histogram of Oriented Gradients (HOG) (Dalal et al., 2005) for feature extraction; the Deformable Parts Model (DPM) (Suleiman et al., 2016) and the Support Vector Machine (SVM) (Kucuk and Eminoglu, 2015) for feature classification; and Non-Maximum Suppression (NMS) (Wan et al., 2015) for object detection. Traditional detection algorithms rely heavily on manually designed features. Due to their lack of portability, resilience, and generalization capability, their detection accuracy is lower than that of deep learning algorithms.
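As a concrete illustration of the NMS step mentioned above (a minimal sketch, not code from the paper; the corner-coordinate box format and the 0.5 threshold are illustrative assumptions), greedy NMS keeps the highest-scoring box and discards any remaining box that overlaps it too much:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and drop remaining boxes whose IoU with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, two near-duplicate detections of the same pigeon collapse to one: `nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)], [0.9, 0.8, 0.7])` keeps indices `[0, 2]`.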

Deep learning detection algorithms are primarily classified into single-stage dense prediction algorithms, such as the SSD (Single Shot MultiBox Detector) (Liu et al., 2016), YOLO (You Only Look Once) (Redmon et al., 2016), and RetinaNet (Lin et al., 2020) series, and two-stage algorithms, such as the Faster R-CNN series (Ren et al., 2017) and Mask R-CNN (He et al., 2020). Recently, Convolutional Neural Networks (CNNs) (Radovic et al., 2017) have been extensively employed for intelligent target recognition. Yan combined transfer learning with pre-trained weights, enabling a Faster RCNN model to locate and classify broccoli (García-Manso et al., 2021). Wu employed a channel-pruned YOLO v4 algorithm to detect apple blossoms in real time and correctly detected three distinct types of self-made apple flower images. Wu also investigated and evaluated five deep learning models in high-precision environments; robustness on apple blooms was evaluated under a variety of tree species and light conditions to provide a technical reference for the development of orchard robots (Wu et al., 2020).

Yan demonstrated an enhanced YOLO v5s capable of light-weight apple target identification, evaluating several network models while increasing the accuracy on occluded apples, thereby providing fundamental technological support for real-time target detection by apple-picking robots (Yan et al., 2021). Bonneau used a compact YOLO v3 to detect goats in pastures and non-invasive image processing to monitor different goat behaviors, enabling outdoor animal monitoring (Bonneau et al., 2020). Tang et al. enhanced underwater pictures and videos using multi-scale Retinex to restore realistic colors; the effectiveness of the proposed method was verified by comparison with four other image enhancement algorithms on a Jetson TX2 (Tang et al., 2019). Wageeh et al. used the YOLO method for fish detection and trajectory tracking: first, blurry pictures are improved with image enhancement algorithms, then the fish target coordinates are detected, and finally the number of fish and trajectory parameters are extracted (Wageeh et al., 2021). Chen et al. used local field potentials (LFPs), which carry animal behavior information (Shen et al., 2019), to decode pigeon behavior with an LFP network method; the method computes functional connection strengths synchronously, reduces the strength-vector dimension through principal component analysis, and decodes individual pigeon behavior with a nearest-neighbor method (Chen et al., 2018). Previous studies showed that spatial navigation depends on a local network of multiple brain regions with strong interactions (Zhao et al., 2019, Shang et al., 2019, Li et al., 2021). Li et al. examined neural activity in the NCL of pigeons and explored LFP spectral and functional connectivity patterns in a goal-directed spatial cognitive task with a detour paradigm (Li et al., 2022).

A comparative experiment was carried out in which Faster RCNN, SSD, YOLO v3, and YOLO v4 were evaluated for real-time detection of the cleaning behavior of breeding pigeons, to assess the effectiveness of YOLO in terms of speed and accuracy (Liu et al., 2020, Bochkovskiy et al., 2020). Considering accuracy, speed, cost, and deployment on low-power computing devices (Raspberry Pi and Jetson Nano), the evolution of the YOLO versions mainly concerns the improvement of backbone feature extraction and changes in parameter count. As a result, this article applies multiple light-weight feature extraction networks to the YOLO v4 technique, including MobileNet v1–v3 (Howard et al., 2017, Sandler et al., 2018, Howard et al., 2019) and GhostNet (Han et al., 2020). An experiment was undertaken to accomplish light-weight detection of breeding pigeons' feather-combing activity, suitable for deployment on real-time monitoring equipment such as inspection robots.

The YOLO v4 model is used in this research to increase the detection accuracy for breeding pigeons in complicated situations. Data augmentation techniques such as zooming, flipping, and color gamut alteration are applied. A light-weight network is employed as the backbone to extract the underlying features, thereby lowering the model's computation and parameter count. The experiment compared the light-weight MobileNet series, CSPDarkNet (Wang et al., 2020), and GhostNet networks, and ultimately chose GhostNet as the backbone for low-level feature extraction in place of the original CSPDarkNet, followed by SPP (Spatial Pyramid Pooling) (He et al., 2014), FPN (Feature Pyramid Network) (Xie et al., 2018), and PANet (Path Aggregation Network) to strengthen the extracted features.
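A back-of-the-envelope sketch can illustrate why a GhostNet backbone lowers computation. In the Ghost module (Han et al., 2020), an ordinary convolution produces only a fraction of the output channels, and cheap depthwise operations generate the remaining "ghost" feature maps. The cost model below is a simplification for illustration, not the authors' implementation; the ratio `s`, kernel sizes, and feature-map shape are assumed values:

```python
def conv_flops(c_in, c_out, h, w, k):
    """Multiply-accumulate cost of a standard k x k convolution
    producing an h x w output (stride 1, bias ignored)."""
    return c_out * h * w * c_in * k * k

def ghost_module_flops(c_in, c_out, h, w, k, s=2, d=3):
    """Ghost module cost: a primary conv produces c_out // s intrinsic
    feature maps; cheap d x d depthwise ops generate the remaining
    (s - 1) * (c_out // s) ghost maps, which are concatenated."""
    m = c_out // s                           # intrinsic channels
    primary = conv_flops(c_in, m, h, w, k)   # ordinary convolution
    cheap = (s - 1) * m * h * w * d * d      # depthwise ghost ops
    return primary + cheap

# For a 56x56 feature map with 16 input/output channels and 3x3 kernels,
# the ghost module needs roughly 1/s of the standard conv's computation.
standard = conv_flops(16, 16, 56, 56, 3)
ghost = ghost_module_flops(16, 16, 56, 56, 3, s=2)
```

With these assumed shapes the standard convolution costs about 7.2 M multiply-accumulates versus about 3.8 M for the ghost module, a speedup close to the ratio s = 2, which is consistent with the compression rates the paper reports at the whole-model level.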

The remainder of this paper is organized as follows: the second section describes the experiment design, data processing, and improvement approaches; the third section discusses the experimental findings and analysis; and the last section concludes.

Section snippets

Dataset collection

The picture data set used in this study originates from a pigeon breeding facility in Xingning City, Meizhou City, Guangdong Province; each pigeon loft has around 2,000 pigeon cages, arranged in eight single rows of 83–93 cages, with three tiers per row. The lens is positioned 120 cm in front of the pigeon cage, photographing the top two rows of pigeon cages. Following processing, the top and bottom pigeon cage images of a single cage are chosen and cropped to produce a

Network training

This experiment is performed using the open-source PyTorch deep learning framework. The device has an Intel(R) Xeon(R) Gold 6146 processor running at 3.20 GHz, 128 GB of RAM, and an NVIDIA TITAN RTX graphics card. Ubuntu 16.04.7 LTS is the operating system, with CUDA 11.2, Python 3.7.10, and torch 1.7.1.

The model selection defines the upper bound of detection performance, and the training data set decides whether that bound is reached. Thus, it is critical to

Results and discussion

The improved YOLO v4 method is compared with the Faster RCNN, SSD, YOLO v4, and YOLO v3 target detection algorithms in this experiment. YOLO v4 was improved to evaluate multiple light-weight feature extraction backbones, including MobileNet v1, MobileNet v2, MobileNet v3, and GhostNet. The experiment was conducted with the IoU threshold set to 0.5. Precision rate, recall rate, average precision, and F1 were used as assessment indicators. The experiment was conducted on the 100th round
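These assessment indicators relate to one another through standard formulas (a sketch, not the paper's evaluation code; the TP/FP/FN counts in the usage example are hypothetical). A detection counts as a true positive when its IoU with an unmatched ground-truth box meets the 0.5 threshold:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts at a fixed IoU threshold."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For instance, with 90 correct detections, 10 spurious ones, and 10 missed pigeons, `detection_metrics(90, 10, 10)` gives precision, recall, and F1 of 0.9 each; average precision (AP50) then summarizes precision over all recall levels as the confidence threshold is swept.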

Conclusions and future work

To close the gap in target detection algorithms for breeding pigeons, this paper proposed an improved YOLO v4 algorithm for detecting the behavior of breeding pigeons. The experimental findings indicate that, compared with the original model, SSD, Faster RCNN, and other models, the upgraded YOLO v4 model achieves higher precision and recall rates of 95.98 percent and 87.13 percent, respectively. The AP50 grew by 8.5 percentage points, from 88.56 to 97.06 percent. To facilitate future

CRediT authorship contribution statement

Jianjun Guo: Methodology, Writing – original draft. Guohuang He: Methodology, Writing – original draft. Hao Deng: Data curation, Investigation. Wenting Fan: Data curation, Investigation. Longqin Xu: Resources, Visualization, Software. Liang Cao: Resources, Visualization, Software. Dachun Feng: Resources, Visualization, Software. Jingbin Li: Supervision, Validation. Huilin Wu: Supervision, Validation. Jiawei Lv: Data curation, Investigation. Shuangyin Liu: Writing – review & editing. Shahbaz Gul

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61871475, 52175240, 51775358, E0506, in part by the special project of laboratory construction of Guangzhou Innovation Platform Construction Plan under Grant 201905010006, Guangzhou key research and development project under Grant 202103000033, 201903010043, Guangdong Science and Technology Project under Grant 2020A1414050060, National Key Technologies R & D Program of China under Grant

References (40)

  • Han, K., Wang, Y., Tian, Q., Guo, J., Xu, Chunjing, Xu, Chang, 2020. GhostNet: More features from cheap operations....
  • He, K., et al., 2020. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell.
  • He, K., Zhang, X., Ren, S., Sun, J., 2016. Identity mappings in deep residual networks. Lect. Notes Comput. Sci....
  • He, K., Zhang, X., Ren, S., Sun, J., 2014. Spatial pyramid pooling in deep convolutional networks for visual...
  • Howard, A., Sandler, M., Chen, B., Wang, W., Chen, L.C., Tan, M., Chu, G., Vasudevan, V., Zhu, Y., Pang, R., Le, Q.,...
  • Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H., 2017. MobileNets:...
  • Hu, J., et al., 2020. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell.
  • Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal covariate...
  • Kasiselvanathan, M., et al., 2020. Palm pattern recognition using scale invariant feature transform. Int. J. Intell. Sustain. Comput.
  • Kucuk, H., Eminoglu, I., 2015. Classification of ALS disease using support vector machines 3, 1664–1667....
1 Contributed equally to this work.
