
A Regenerated Feature Extraction Method for Cross-modal Image Registration

  • Conference paper
  • First Online:
Advances in Brain Inspired Cognitive Systems (BICS 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10989)


Abstract

Cross-modal image registration is a challenging problem in computer vision and pattern recognition. Inspired by the way humans gradually deepen their understanding during the cognitive process, we present a novel method to automatically register images of different modalities. Unlike most existing registration methods, which align images using a single type of feature or by directly combining multiple features, we employ a "regenerated" mechanism coupled with dynamic routing to adaptively detect and match features across modalities. Geometry-based maximally stable extremal regions (MSER) are first extracted to rapidly detect non-overlapping regions, which serve as the primitives for feature regeneration; from these regions, novel control points are generated using a salient image disks (SID) operator embedded with a sub-pixel iteration. A dynamic routing scheme is then proposed to select suitable features and match the images. Experimental results on optical and multi-sensor images show that our method achieves better accuracy than state-of-the-art approaches.
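The sub-pixel iteration mentioned in the abstract can be illustrated with a minimal sketch: parabolic (quadratic) interpolation around an integer peak of a response map, a standard technique for sub-pixel control-point localization. The `subpixel_peak` function and the synthetic Gaussian response below are illustrative assumptions, not the paper's SID implementation.

```python
import numpy as np

def subpixel_peak(response, y, x):
    """Refine an integer peak (y, x) of a 2D response map to
    sub-pixel accuracy via two independent 1D parabolic fits."""
    def refine(f_m, f_0, f_p):
        # Vertex of the parabola through (-1, f_m), (0, f_0), (+1, f_p).
        denom = f_m - 2.0 * f_0 + f_p
        if denom == 0.0:
            return 0.0
        return 0.5 * (f_m - f_p) / denom

    dy = refine(response[y - 1, x], response[y, x], response[y + 1, x])
    dx = refine(response[y, x - 1], response[y, x], response[y, x + 1])
    return y + dy, x + dx

# Example: a Gaussian response whose true peak lies off-grid at (5.3, 7.6).
yy, xx = np.mgrid[0:11, 0:15]
resp = np.exp(-((yy - 5.3) ** 2 + (xx - 7.6) ** 2) / 4.0)
y0, x0 = np.unravel_index(np.argmax(resp), resp.shape)
ys, xs = subpixel_peak(resp, y0, x0)   # refined location near (5.3, 7.6)
```

In practice such a refinement would be applied to each candidate control point produced by the region detector before matching, reducing localization error below one pixel.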

This work was supported by the National Key R&D Program of China under Grant 2017YFB1002202, National Natural Science Foundation of China under Grant 61773316, Fundamental Research Funds for the Central Universities under Grant 3102017AX010, and the Open Research Fund of Key Laboratory of Spectral Imaging Technology, Chinese Academy of Sciences.



Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Qi Wang.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, J., Wang, Q., Li, X. (2018). A Regenerated Feature Extraction Method for Cross-modal Image Registration. In: Ren, J., et al. Advances in Brain Inspired Cognitive Systems. BICS 2018. Lecture Notes in Computer Science, vol 10989. Springer, Cham. https://doi.org/10.1007/978-3-030-00563-4_43


  • DOI: https://doi.org/10.1007/978-3-030-00563-4_43

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00562-7

  • Online ISBN: 978-3-030-00563-4

  • eBook Packages: Computer Science, Computer Science (R0)
