
Whole-pixel registration of non-rigid images using correspondences interpolation on sparse feature seeds

  • Original article
  • Published in The Visual Computer

Abstract

Whole-pixel registration of non-rigid images with high accuracy and efficiency is a challenging problem in computer vision. To address this issue, we propose a correspondence vector field (CVF) interpolation approach based on sparse matching of feature seeds. First, we detect and match two types of feature seeds to improve the accuracy of the subsequent dense CVF interpolation: the first type guarantees accuracy along motion boundaries, while the second ensures a uniform spatial distribution of seeds, which benefits the interpolation. Second, on this basis we estimate the dense CVF region by region using the proposed interpolation approach. Finally, we perform whole-pixel registration of the non-rigid images to yield the image alignment. Unlike traditional CVF interpolation approaches based on the optical flow field, ours builds on sparse matching of feature seeds; it is therefore not limited by large displacements and more easily achieves accurate matching of certain key points, which is critical to the final interpolation result. Qualitative and quantitative experiments on several widely used benchmark datasets demonstrate that our approach outperforms state-of-the-art methods.
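The pipeline described in the abstract — matching sparse feature seeds, interpolating them into a dense CVF, and warping at whole-pixel precision — can be sketched generically. This is a minimal illustration under stated assumptions, not the paper's algorithm: the two-type seed detection and the regional interpolation scheme are replaced by plain scattered-data interpolation, and the names `densify_cvf` and `warp_whole_pixel` are illustrative, not from the paper.

```python
# Minimal sketch of the sparse-to-dense idea only, NOT the paper's method:
# seed detection and regional interpolation are stood in for by a plain
# scattered-data interpolation of matched seed displacements.
import numpy as np
from scipy.interpolate import griddata

def densify_cvf(seeds_ref, seeds_mov, shape):
    """Interpolate sparse seed displacements into a dense CVF.

    seeds_ref, seeds_mov: (N, 2) matched (row, col) seed positions in the
    reference and moving images. Returns an (H, W, 2) displacement field
    such that mov[p + cvf(p)] aligns with ref[p].
    """
    disp = seeds_mov - seeds_ref                      # sparse displacements
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]     # dense pixel grid

    def interp(vals):
        dense = griddata(seeds_ref, vals, (rows, cols), method="linear")
        holes = np.isnan(dense)                       # outside the seed hull
        if holes.any():
            dense[holes] = griddata(seeds_ref, vals,
                                    (rows[holes], cols[holes]),
                                    method="nearest")
        return dense

    return np.stack([interp(disp[:, 0]), interp(disp[:, 1])], axis=-1)

def warp_whole_pixel(mov, cvf):
    """Whole-pixel (integer) backward warp of the moving image along the CVF."""
    h, w = mov.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    r = np.clip(np.rint(rows + cvf[..., 0]).astype(int), 0, h - 1)
    c = np.clip(np.rint(cols + cvf[..., 1]).astype(int), 0, w - 1)
    return mov[r, c]

# Toy example: the moving image is the reference shifted down by two rows;
# four corner seeds suffice to recover the constant displacement field.
ref = np.arange(100, dtype=float).reshape(10, 10)
mov = np.roll(ref, 2, axis=0)                         # mov[r] == ref[r - 2]
seeds_ref = np.array([[0, 0], [0, 9], [9, 0], [9, 9]], dtype=float)
seeds_mov = seeds_ref + np.array([2.0, 0.0])
cvf = densify_cvf(seeds_ref, seeds_mov, ref.shape)
aligned = warp_whole_pixel(mov, cvf)                  # matches ref away from the border
```

The rounding in `warp_whole_pixel` is what makes the registration "whole-pixel": each output pixel is copied from a single integer source location rather than blended by sub-pixel interpolation.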

[Figs. 1–13]



Author information


Correspondence to Kai He.

Ethics declarations

Conflict of interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature or kind in any product, service, or company that could be construed as influencing the review of this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

He, K., Zhao, Y., Liu, Z. et al. Whole-pixel registration of non-rigid images using correspondences interpolation on sparse feature seeds. Vis Comput 38, 1815–1832 (2022). https://doi.org/10.1007/s00371-021-02107-4

