
Accurate estimation of feature points based on individual projective plane in video sequence

  • Original Article
  • Published in The Visual Computer

Abstract

The stability and quantity of feature matches in a video sequence are key issues for feature tracking and related applications. Existing matching methods depend on feature detection, which is easily disturbed by illumination changes, noise, or occlusion, and detection failures therefore directly degrade the matching results. In this paper, we propose an accurate prediction method for estimating interest points in a video sequence: for each undetected point, a stable mapping is extracted on its suitable projective plane, built from coplanar feature points that have already been detected in adjacent frames. The proposed prediction method breaks the limitation of previous approaches, which rely heavily on feature detection. Our experiments show that the method not only predicts features accurately but also enriches the correspondences, thereby prolonging feature track lengths.
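The central step the abstract describes is a plane-induced mapping: once enough coplanar feature points are matched between two adjacent frames, the homography they induce can transfer a point that was detected in one frame but missed in the other. Below is a minimal sketch of that transfer step in Python with OpenCV; the function name `predict_point`, the RANSAC reprojection threshold, and the use of `cv2.findHomography`/`cv2.perspectiveTransform` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2


def predict_point(coplanar_src, coplanar_dst, undetected_pt, ransac_thresh=3.0):
    """Predict where an undetected feature point lands in the adjacent frame.

    coplanar_src, coplanar_dst : (N, 2) arrays of matched points (N >= 4) that
        lie on the same scene plane as `undetected_pt`, detected in two
        adjacent frames.
    undetected_pt : (x, y) of a point seen in the first frame but not detected
        in the second.
    Returns the predicted (x, y) in the second frame, or None if the plane
    homography cannot be estimated.
    """
    src = np.asarray(coplanar_src, dtype=np.float32)
    dst = np.asarray(coplanar_dst, dtype=np.float32)

    # Fit the homography induced by the shared scene plane from the
    # already-matched coplanar points (RANSAC rejects stray outliers).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if H is None:
        return None

    # Transfer the missing point through the same plane's homography.
    pt = np.asarray(undetected_pt, dtype=np.float32).reshape(1, 1, 2)
    predicted = cv2.perspectiveTransform(pt, H)
    return tuple(predicted[0, 0])
```

In practice, the coplanar matches could come from any detector/descriptor pair (e.g., SIFT or ORB) grouped by plane; the accuracy of the predicted point then depends on how well the selected matches actually share a plane with the undetected point, which is what the paper's selection of a suitable projective plane is meant to ensure.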




Acknowledgements

This work was funded by the National Natural Science Foundation of China (NSFC, Grants 41771427 and 41631174).

Author information

Corresponding authors

Correspondence to Huajun Liu, Shiran Tang or Dian Lei.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Liu, H., Tang, S., Lei, D. et al. Accurate estimation of feature points based on individual projective plane in video sequence. Vis Comput 36, 2091–2103 (2020). https://doi.org/10.1007/s00371-020-01928-z

