
Extraction Method of Traceability Target Track in Grain Depot Based on Target Detection and Recognition

  • Conference paper
Data Science (ICDS 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1179)


Abstract

Food security is related to the national economy and people's livelihood, and grain storage security is the key to achieving food security; relevant departments therefore attach great importance to traceability in the warehousing link. In this paper, we use an R-CNN algorithm to extract traceability-related target information from surveillance video inside the warehouse, match the targets in adjacent frames based on their class, location, and image features, and finally link the same target across adjacent frames to obtain its running track in the current monitoring scene. The experimental results show that the R-CNN-based target detection and recognition algorithm achieves high recognition accuracy and a high detection rate, meeting the needs of real-time analysis. At the same time, the multi-factor target matching fusion algorithm balances matching accuracy against matching efficiency and is therefore a comparatively good target matching method. The trajectories extracted in this way directly reflect the running route of each target, and such data are also easier for the public to accept as evidence and a basis for traceability.
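
To make the matching-and-linking step concrete, the sketch below illustrates one way the described pipeline could be realised in Python. It is a minimal illustration, not the authors' implementation: the Detection structure, the cosine-similarity appearance measure, the equal fusion weights, the distance normalisation constant, and the greedy assignment are all assumptions introduced for the example; in the paper the detections themselves come from the R-CNN stage.

# Minimal sketch (assumptions noted above) of multi-factor matching between
# adjacent frames and linking of matches into tracks.
from dataclasses import dataclass
from math import hypot
from typing import List, Optional

@dataclass
class Detection:
    cls: str                  # class label predicted by the detector
    cx: float                 # bounding-box centre, x
    cy: float                 # bounding-box centre, y
    feature: List[float]      # appearance feature (e.g. a colour histogram)
    track_id: Optional[int] = None

def feature_similarity(a: List[float], b: List[float]) -> float:
    # Cosine similarity between appearance features (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_score(prev: Detection, curr: Detection, max_dist: float = 100.0) -> float:
    # Fuse the three cues: class must agree, then combine location and appearance.
    if prev.cls != curr.cls:
        return 0.0
    dist = hypot(prev.cx - curr.cx, prev.cy - curr.cy)
    loc_score = max(0.0, 1.0 - dist / max_dist)        # closer centres score higher
    app_score = feature_similarity(prev.feature, curr.feature)
    return 0.5 * loc_score + 0.5 * app_score           # illustrative equal weights

def link_frames(prev_frame: List[Detection], curr_frame: List[Detection],
                next_id: int, threshold: float = 0.5) -> int:
    # Greedily assign each current detection to its best unmatched predecessor;
    # unmatched detections start a new track. Returns the next unused track id.
    used = set()
    for curr in curr_frame:
        best, best_score = None, threshold
        for i, prev in enumerate(prev_frame):
            if i in used:
                continue
            score = match_score(prev, curr)
            if score > best_score:
                best, best_score = i, score
        if best is not None:
            used.add(best)
            curr.track_id = prev_frame[best].track_id
        else:
            curr.track_id = next_id
            next_id += 1
    return next_id

# Example: one person-class target moving slightly between two frames.
frame0 = [Detection("person", 50.0, 80.0, [0.2, 0.5, 0.3], track_id=0)]
frame1 = [Detection("person", 55.0, 83.0, [0.21, 0.48, 0.31])]
link_frames(frame0, frame1, next_id=1)
print(frame1[0].track_id)   # 0: the same target is linked across frames

Applying link_frames to each pair of consecutive frames and collecting the (cx, cy) centres per track_id in frame order then yields the running track of each target in the monitored scene, which is the kind of trajectory evidence the paper proposes for traceability.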



Acknowledgements

This work was supported by China Special Fund for Grain-scientific Research in the Public Interest (201513004), National Science Foundation of China (61403188), Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (14KJA520001, 16KJA170003, 15KJA120001), and the Youth Foundation of Nanjing Institute of Technology (QKJ201803).

Author information

Corresponding author

Correspondence to Jianshu Zhang.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, J., Li, B., Sun, J., Mao, B., Lu, A. (2020). Extraction Method of Traceability Target Track in Grain Depot Based on Target Detection and Recognition. In: He, J., et al. Data Science. ICDS 2019. Communications in Computer and Information Science, vol 1179. Springer, Singapore. https://doi.org/10.1007/978-981-15-2810-1_53

  • DOI: https://doi.org/10.1007/978-981-15-2810-1_53

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-2809-5

  • Online ISBN: 978-981-15-2810-1

  • eBook Packages: Computer Science (R0)
