
Robust graticule intersection localization for rotated topographic maps

  • Original Paper
Machine Vision and Applications

Abstract

Graticule intersections in topographic maps are usually considered suitable reference points for geometric calibration because the corresponding geographic information can be retrieved directly from the maps or derived from sheet numbers. Previous research on automatic corner-point detection relies on the assumption that scanned maps are not rotated, which rarely holds in practice. To address this issue, this paper proposes a semantic segmentation approach for accurate graticule intersection localization. A fully convolutional network provides pixel-level information about the locations of specific rectangular objects at the corners of map frames through dense classification within regions of interest. The globally optimal segmentation of the rotated foreground object is obtained with the graph cuts technique, and the bounding box of the rotated object is then recovered with the minimum-area enclosing rectangle algorithm. Finally, the coordinates of the graticule intersections are derived from the positions of the sliding windows and the relative locations of the object vertices. The proposed method reduces the average localization error to 1.5 pixels, 32.4% lower than that of the baseline model, and achieves a standard deviation of localization error of 0.91 pixels, corresponding to an average improvement of 52% over the baseline model in the location-variance metric.
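The abstract does not show the paper's implementation, but the minimum-area enclosing rectangle step it describes can be sketched with the classical rotating-calipers property: the optimal rectangle has one side collinear with an edge of the convex hull of the segmented foreground pixels, so it suffices to test each hull-edge orientation. The function names below are illustrative, not the authors' code; a minimal stdlib-only sketch:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns the hull in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_rect(points):
    """Minimum-area enclosing rectangle of a point set.

    The optimal rectangle is collinear with some hull edge, so we rotate
    the hull into each edge's frame, take the axis-aligned box there, and
    keep the tightest one. Returns (area, corners); corners are the four
    rectangle vertices in the original frame (unordered).
    """
    hull = convex_hull(points)
    best = None
    n = len(hull)
    for i in range(n):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % n]
        theta = math.atan2(y2 - y1, x2 - x1)
        c, s = math.cos(theta), math.sin(theta)
        # Rotate by -theta so the current edge lies along the x-axis.
        xs = [c * x + s * y for x, y in hull]
        ys = [-s * x + c * y for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            # Rotate the box corners back into the original frame.
            corners = [(c * u - s * v, s * u + c * v)
                       for u in (min(xs), max(xs))
                       for v in (min(ys), max(ys))]
            best = (area, corners)
    return best
```

For a square rotated 45 degrees, e.g. the points (0,0), (1,1), (2,0), (1,-1), the axis-aligned bounding box has area 4, while the edge-aligned rectangle found here has area 2. In the full pipeline, the recovered rectangle vertices would then be offset by the sliding-window origin to obtain global intersection coordinates.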


[Figs. 1–9: images not included in this preview]



Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 61563053 and 31460625).

Author information

Correspondence to Luan Dong.



About this article


Cite this article

Dong, L., Yan, Q. & Zheng, F. Robust graticule intersection localization for rotated topographic maps. Machine Vision and Applications 30, 737–747 (2019). https://doi.org/10.1007/s00138-019-01025-9

