
A Competent Frame Work for Efficient Object Detection, Tracking and Classification

Published in: Wireless Personal Communications

Abstract

Surveillance is a growing concern in modern technology, playing a key role in monitoring activity in every corner of the world, and moving-object detection and tracking by computer-vision methods is a central part of it: detecting moving objects is the first step of video analysis in many computer-vision applications. In this paper we propose a robust video object detection and tracking technique. The technique is divided into three phases: a detection phase, a tracking phase, and an evaluation phase. The detection phase comprises foreground segmentation and noise reduction; a Mixture of Adaptive Gaussians model is proposed to achieve efficient foreground segmentation, and a fuzzy morphological filter is applied to remove the noise present in the foreground-segmented frames. In the tracking phase, moving objects are tracked by blob detection. Finally, the evaluation phase performs feature extraction and classification: texture-based and quality-based features are extracted from the processed frames and classified in WEKA using three classifiers, namely J48, k-nearest neighbour, and a multilayer perceptron. The performance of the proposed technique is measured in the evaluation phase and tabulated.
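The detection and tracking phases described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: a single adaptive Gaussian per pixel stands in for the proposed Mixture of Adaptive Gaussians, an ordinary crisp binary opening stands in for the fuzzy morphological filter, and blob detection is done by 4-connected component labelling. All function names here are our own.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model (a simplified stand-in
    for the Mixture of Adaptive Gaussians).  Pixels more than k standard
    deviations from the running mean are flagged as foreground; background
    statistics adapt in place with learning rate alpha."""
    diff = np.abs(frame - mean)
    fg = diff > k * np.sqrt(var)
    bg = ~fg  # adapt the model only where the pixel matched the background
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * (diff[bg] ** 2 - var[bg])
    return fg

def opening(mask):
    """Binary opening (3x3 erosion then dilation) as a crisp stand-in for
    the fuzzy morphological filter: removes isolated noise pixels while
    preserving compact foreground regions."""
    def erode(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out
    def dilate(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.zeros_like(m, dtype=bool)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out
    return dilate(erode(mask))

def blob_boxes(mask):
    """Blob detection: label 4-connected foreground components and return
    one bounding box (ymin, xmin, ymax, xmax) per blob -- the per-frame
    observations a tracker would associate over time."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, ys, xs = [(sy, sx)], [], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    ys.append(y); xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

Feeding consecutive frames through `update_background`, cleaning each mask with `opening`, and taking `blob_boxes` of the result yields one box per moving object per frame, which is the raw material for tracking.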
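The evaluation phase can likewise be sketched. This is again only an illustrative stand-in: a toy three-value feature vector replaces the paper's texture- and quality-based features, and a hand-rolled k-nearest-neighbour vote represents just one of the three WEKA classifiers (J48 and the multilayer perceptron are not reproduced here). The function names are our own.

```python
import numpy as np

def texture_features(patch):
    """Toy stand-in for the paper's texture/quality features: mean
    intensity, standard deviation, and mean absolute horizontal
    gradient of a grayscale patch."""
    grad = np.abs(np.diff(patch.astype(float), axis=1))
    return np.array([patch.mean(), patch.std(), grad.mean()])

def knn_predict(train_X, train_y, query, k=3):
    """k-nearest-neighbour majority vote over feature vectors -- the
    simplest of the three classifiers the paper runs in WEKA."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

In use, each blob detected in the tracking phase would be cropped from the frame, reduced to a feature vector by `texture_features`, and assigned a class label by `knn_predict` against a labelled training set.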



Author information


Correspondence to Mahalingam Thangaraj.



Cite this article

Thangaraj, M., Monikavasagom, S. A Competent Frame Work for Efficient Object Detection, Tracking and Classification. Wireless Pers Commun 107, 939–957 (2019). https://doi.org/10.1007/s11277-019-06310-4
