
Guided Sonar-to-Satellite Translation

Published in: Journal of Intelligent & Robotic Systems

Abstract

Underwater navigation and localization are greatly enhanced by the use of acoustic images. However, such images are difficult to interpret. In contrast, aerial images are easier to interpret, but using them requires Global Positioning System (GPS) sensors. Because water absorbs GPS radio signals, GPS is unavailable in underwater environments. Thus, we propose a method to translate sonar images acquired underwater into their aerial counterparts, a process we call sonar-to-satellite translation. To perform the conversion, a U-Net based neural network is proposed, enhanced with state-of-the-art techniques such as dilated convolutions and guided filters. Our approach is then validated on two datasets containing sonar images and their satellite analogues. Qualitative experimental results indicate that the proposed method can transfer features from acoustic images to aerial images, generating satellite-like images that are easier to interpret and visualize.
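The guided filter mentioned in the abstract admits a compact sketch. The following is a minimal NumPy implementation of the guided image filter (He et al.), not the authors' TensorFlow code; the function names, window radius `r`, and regularization value `eps` are illustrative assumptions:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    pad = np.pad(x, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(-2, -1))

def guided_filter(guide, src, r=4, eps=1e-2):
    """Guided image filter: smooth `src` while following the edges of `guide`."""
    mean_I, mean_p = box_mean(guide, r), box_mean(src, r)
    var_I = box_mean(guide * guide, r) - mean_I ** 2
    cov_Ip = box_mean(guide * src, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # local linear coefficient per window
    b = mean_p - a * mean_I
    # averaging a and b over overlapping windows yields the final output
    return box_mean(a, r) * guide + box_mean(b, r)
```

Filtering an image with itself as the guide smooths flat regions while preserving edges, which is one reason guided filters are attractive as a cheap, differentiable layer inside image-to-image networks.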


Availability of data and materials

A TensorFlow implementation of the model and the ARACATI 2017 dataset are available for download at: https://github.com/giovgiac/son2sat.


Acknowledgements

This research is partly supported by CNPq, CAPES, and FAPERGS. We would also like to thank our colleagues from NAUTEC-FURG for helping with the experimental data and for productive discussions and meetings. Finally, we thank NVIDIA for the donation of high-performance graphics cards. All authors are with NAUTEC, the Intelligent Robotics and Automation Group, Universidade Federal do Rio Grande (FURG), Rio Grande, Brazil.

Author Contributions

  • Giovanni G. De Giacomo: implementation and execution of the deep learning experiments; writing of the manuscript.

  • Matheus M. dos Santos: development of the dataset and associated tools; helped write the manuscript.

  • Paulo L. J. Drews-Jr: theoretical support for the idea; revision of the manuscript.

  • Silvia S. C. Botelho: theoretical support for the idea; revision of the manuscript.

Funding

This study was partly supported by the National Council for Scientific and Technological Development (CNPq) and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. This paper is also a contribution of the INCT-Mar COI, funded by CNPq Grant Number 610012/2011-8.

Author information


Corresponding author

Correspondence to Giovanni G. De Giacomo.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

De Giacomo, G.G., dos Santos, M.M., Drews, P.L.J. et al. Guided Sonar-to-Satellite Translation. J Intell Robot Syst 101, 46 (2021). https://doi.org/10.1007/s10846-021-01324-2


Keywords

Navigation