Abstract
In this paper we first introduce an airport ground surveillance network composed of a data acquisition terminal based on multiple cameras, data transmission over high-speed optical fiber, and a processing terminal hosting intelligent airport applications such as intrusion warning and conflict prediction. We then present a moving object recognition algorithm, named AMORnet, which forms the basis of the intelligent applications in this surveillance network. Unlike traditional object detection, which cannot distinguish static from moving objects, and moving object detection, which requires accurate silhouette segmentation, AMORnet only locates moving objects and is much faster than time-consuming segmentation. To this end, we first estimate the scene background with a motion estimation network; compared with the commonly used temporal-histogram-based approach, our background estimation method better copes with the infrequent aircraft movements in airports. Second, we use feature pyramids to perform regression and classification at multiple levels of feature abstraction, so that only moving objects are recognized. Finally, experiments are conducted on an airport ground surveillance benchmark to verify the effectiveness of the proposed AMORnet.
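The pipeline the abstract describes, estimating a scene background and then localizing only the regions that move, can be sketched in a few lines. The per-pixel temporal median and fixed difference threshold below are illustrative stand-ins for the paper's learned motion-estimation network, not AMORnet itself; the function names and the synthetic scene are assumptions made for this sketch.

```python
import numpy as np

def estimate_background(frames):
    # Per-pixel temporal median over the frame history: a simple
    # stand-in for a learned background / motion-estimation model.
    return np.median(frames, axis=0)

def locate_moving_objects(frame, background, thresh=30):
    # Difference the current frame against the background, threshold
    # to a binary foreground mask, and return the bounding box of the
    # foreground pixels as (x0, y0, x1, y1), or None if nothing moved.
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic example: a static 64x64 scene, then a bright 8x8 "object"
# appears in the current frame and should be localized.
rng = np.random.default_rng(0)
scene = rng.integers(0, 50, size=(64, 64)).astype(np.uint8)
history = np.stack([scene] * 9)        # static frame history
current = scene.copy()
current[20:28, 30:38] = 255            # moving object in current frame

bbox = locate_moving_objects(current, estimate_background(history))
print(bbox)  # (30, 20, 37, 27)
```

Note how a static object baked into the history becomes part of the background and is never reported, which is the behavioral distinction from plain object detection that the abstract emphasizes.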
This work was supported by the Project of Quzhou Municipal Government (2020D011), and National Science Foundation of China (U1733111, U19A2052).
© 2022 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Zhang, Z., Zhang, X., Chen, D., Yu, H. (2022). Moving Object Recognition for Airport Ground Surveillance Network. In: Calafate, C.T., Chen, X., Wu, Y. (eds) Mobile Networks and Management. MONAMI 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 418. Springer, Cham. https://doi.org/10.1007/978-3-030-94763-7_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-94762-0
Online ISBN: 978-3-030-94763-7
eBook Packages: Computer Science (R0)