Hatching eggs classification based on deep learning

Abstract

To achieve fertility detection and classification of hatching eggs, this paper proposes a method based on deep learning. Five-day hatching eggs are classified into fertile eggs, dead eggs and infertile eggs. First, we combine a transfer learning strategy with a convolutional neural network (CNN) and use a network with two branches. In the first branch, features are extracted by an AlexNet model pre-trained on the large-scale ImageNet dataset. In the second branch, the dataset is trained directly on a multi-layer network containing six convolutional layers and four pooling layers. The features of the two branches are combined as the input to the following fully connected layer. Finally, a new model is trained on a small-scale dataset with this network, and the final accuracy of our method is 99.5%. The experimental results show that the proposed method successfully solves the multi-class classification problem on a small-scale hatching-egg dataset and achieves high accuracy. Moreover, our model has good generalization ability and can be adapted to a variety of eggs.
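
To make the architecture concrete, the two-branch design described above is sketched below as a minimal PyTorch example. Only the overall structure (a pre-trained AlexNet feature branch, a custom branch with six convolutional and four pooling layers, feature concatenation, and a fully connected classifier) follows the abstract; the filter counts, classifier width, and input resolution are illustrative assumptions rather than the authors' reported settings.

# Minimal PyTorch sketch of the two-branch network described in the abstract.
# Layer sizes of the custom branch and the classifier are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class TwoBranchEggNet(nn.Module):
    def __init__(self, num_classes=3):  # fertile / dead / infertile
        super().__init__()
        # Branch 1: AlexNet convolutional features pre-trained on ImageNet
        # (transfer learning); these weights could also be frozen.
        alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        self.branch1 = nn.Sequential(alexnet.features,
                                     nn.AdaptiveAvgPool2d((6, 6)))

        # Branch 2: a custom network with six convolutional layers and four
        # pooling layers, trained from scratch on the egg images.
        def conv(in_c, out_c):
            return nn.Sequential(nn.Conv2d(in_c, out_c, 3, padding=1),
                                 nn.ReLU(inplace=True))

        self.branch2 = nn.Sequential(
            conv(3, 32), nn.MaxPool2d(2),
            conv(32, 64), nn.MaxPool2d(2),
            conv(64, 128), conv(128, 128), nn.MaxPool2d(2),
            conv(128, 256), conv(256, 256), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((6, 6)),
        )

        # Features of the two branches are concatenated and fed to the
        # fully connected classifier.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear((256 + 256) * 6 * 6, 1024),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        f1 = self.branch1(x)  # (N, 256, 6, 6) transferred AlexNet features
        f2 = self.branch2(x)  # (N, 256, 6, 6) features learned from scratch
        return self.classifier(torch.cat([f1, f2], dim=1))


if __name__ == "__main__":
    model = TwoBranchEggNet()
    logits = model(torch.randn(2, 3, 224, 224))  # dummy batch of 2 RGB images
    print(logits.shape)  # torch.Size([2, 3])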

Acknowledgements

This work is supported by the National Natural Science Foundation of China under grant No. 61771340 and the Key Technologies R&D Program of Tianjin under grant No. 14ZCZDGX00033.

Author information

Corresponding author

Correspondence to Zhitao Xiao.

Cite this article

Geng, L., Yan, T., Xiao, Z. et al. Hatching eggs classification based on deep learning. Multimed Tools Appl 77, 22071–22082 (2018). https://doi.org/10.1007/s11042-017-5333-2
