Abstract
Cross-modal image registration is a challenging problem in computer vision and pattern recognition. Inspired by the way humans gradually deepen their understanding during the cognitive process, we present a novel method to automatically register images of different modalities. Unlike most existing approaches, which align images using a single type of feature or by directly combining multiple features, we employ a "regeneration" mechanism coupled with dynamic routing to adaptively detect and match features across modalities. Geometry-based maximally stable extremal regions (MSER) are first applied to rapidly detect non-overlapping regions, which serve as primitives for feature regeneration; from these, new control points are generated by a salient image disks (SID) operator with an embedded sub-pixel iteration. A dynamic routing scheme is then proposed to select suitable features and match the images. Experimental results on optical and multi-sensor images show that our method achieves higher accuracy than state-of-the-art approaches.
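The sub-pixel refinement step mentioned above can be illustrated with a minimal sketch: starting from a coarse region-based seed, an intensity-weighted centroid is iterated inside a disk-shaped window until it converges below pixel resolution. This is only an illustrative stand-in for the paper's SID operator, not its actual implementation; the function name, window radius, and convergence settings here are assumptions.

```python
import numpy as np

def subpixel_centroid(img, seed, radius=5, iters=20, tol=1e-3):
    """Refine a coarse control point to sub-pixel accuracy by iterating
    an intensity-weighted centroid inside a disk-shaped window.
    (Illustrative stand-in for a salient-disk style refinement.)"""
    y, x = float(seed[0]), float(seed[1])
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    for _ in range(iters):
        # disk-shaped support centred on the current estimate
        mask = (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
        w = img * mask
        total = w.sum()
        if total == 0:
            break
        ny = (w * yy).sum() / total  # intensity-weighted centroid
        nx = (w * xx).sum() / total
        converged = abs(ny - y) < tol and abs(nx - x) < tol
        y, x = ny, nx
        if converged:
            break
    return y, x

# Toy image: a Gaussian blob whose true centre lies off the pixel grid.
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
img = np.exp(-((yy - 12.3) ** 2 + (xx - 17.8) ** 2) / (2 * 2.0 ** 2))

# Seed with an integer (pixel-level) guess; refinement recovers the
# sub-pixel location of the blob centre.
cy, cx = subpixel_centroid(img, seed=(10, 16))
```

In a full pipeline one would obtain the seeds from the detected MSER regions and feed the refined control points into the matching stage.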
This work was supported by the National Key R&D Program of China under Grant 2017YFB1002202, National Natural Science Foundation of China under Grant 61773316, Fundamental Research Funds for the Central Universities under Grant 3102017AX010, and the Open Research Fund of Key Laboratory of Spectral Imaging Technology, Chinese Academy of Sciences.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, J., Wang, Q., Li, X. (2018). A Regenerated Feature Extraction Method for Cross-modal Image Registration. In: Ren, J., et al. Advances in Brain Inspired Cognitive Systems. BICS 2018. Lecture Notes in Computer Science(), vol 10989. Springer, Cham. https://doi.org/10.1007/978-3-030-00563-4_43
Print ISBN: 978-3-030-00562-7
Online ISBN: 978-3-030-00563-4