
Image-matching framework based on region partitioning for target image location

Published in Telecommunication Systems.

Abstract

The target-location problem of observation-and-combat integrated UAVs used in battle makes image matching both challenging and vitally important. This paper presents an image-matching framework based on region partitioning for target-image location, operating on complex simulated aerial images that include, for example, scale-changed, rotated, blurred, and occluded images. First, an image-evaluation approach based on a weighted orientation histogram is proposed to judge whether an image is well textured or textureless. Two approaches based on a layered architecture are then applied to well-textured and textureless images, respectively. Within these approaches, an improved SIFT image-matching algorithm that incorporates detected Harris corners into the keypoint set is proposed, and the Bhattacharyya distance between orientation histograms is used to select the best result among candidate region pairs. Experimental results show that the proposed region-partitioning approach achieves a matching rate 42.04% higher than that of the traditional approach.
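The region-pair selection step described above compares orientation histograms with the Bhattacharyya distance. Below is a minimal sketch of that comparison in NumPy; the helper names, bin count, and the synthetic example data are illustrative assumptions, not the paper's exact binning or weighting scheme.

```python
import numpy as np

def orientation_histogram(angles, bins=36):
    """Quantize gradient orientations (radians) into a normalized histogram.
    36 bins (10 degrees each) is an assumed choice, not the paper's."""
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2.0 * np.pi))
    hist = hist.astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms:
    BC = sum(sqrt(p_i * q_i)), distance = -ln(BC).
    Identical histograms give BC = 1 and distance ~0."""
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, 1e-12))  # clamp to avoid log(0)

# Example: a uniform vs. a concentrated orientation distribution.
rng = np.random.default_rng(0)
a = orientation_histogram(rng.uniform(0.0, 2.0 * np.pi, 500))
b = orientation_histogram(rng.normal(np.pi, 0.5, 500) % (2.0 * np.pi))
d_self = bhattacharyya_distance(a, a)   # ~0 for identical histograms
d_cross = bhattacharyya_distance(a, b)  # larger for dissimilar histograms
```

In a region-partitioning setting, one would compute such a distance for each candidate region pair and keep the pair with the smallest distance as the best match.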

Figs. 1–12 (images not included).



Acknowledgements

First and foremost, I would like to express my deepest gratitude to my supervisor, Jun-Bao Li, a respectable, responsible, and resourceful scholar, who provided valuable guidance at every stage of writing this paper. I would also like to thank Jeng-Shyang Pan for all his kindness and help, Shuo Wang and Xudong Lv for the experimental analysis, and Shuanglong Cui for the critical revision of the article.

Author information


Corresponding author

Correspondence to Jun-Bao Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by the National Science Foundation of China under Grant Nos. 61671170 and 61872085; the Science and Technology Foundation of the National Defense Key Laboratory of Science and Technology on Parallel and Distributed Processing (PDL) under Grant No. 6142110180406; the Science and Technology Foundation of the ATR National Defense Key Laboratory under Grant No. 6142503180402; the China Academy of Space Technology (CAST) Innovation Fund under Grant No. 2018CAST33; the Joint Fund of China Electronics Technology Group Corporation and Equipment Pre-Research under Grant No. 6141B08231109; the Jiamusi University Young Innovative Talents Training Program under Grant No. 22Zq201506; and the Excellent Discipline Team Project of Jiamusi University under Grant No. JDXKTD-2019008.


About this article


Cite this article

Liu, X., Li, JB., Pan, JS. et al. Image-matching framework based on region partitioning for target image location. Telecommun Syst 74, 269–286 (2020). https://doi.org/10.1007/s11235-020-00657-x

