Weakly-Supervised Man-Made Object Recognition in Underwater Optimal Image Through Deep Domain Adaptation

  • Conference paper

Neural Information Processing (ICONIP 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11305)

Abstract

Underwater man-made object recognition in optical images plays an important role in both image processing and oceanic engineering. Deep learning methods have achieved impressive performance on many recognition tasks in in-air images; however, they are limited in the proposed task, since it is difficult to collect and annotate sufficient data to train the networks. Considering that large-scale in-air images of man-made objects are much easier to acquire in practice, one can train a network on in-air images and directly apply it to underwater images. However, the distribution mismatch between in-air and underwater images leads to a significant performance drop. In this work, we propose an end-to-end weakly-supervised framework that recognizes underwater man-made objects using large-scale labeled in-air images and sparsely labeled underwater images. A novel two-level feature alignment approach is introduced into a typical deep domain adaptation network to tackle the domain shift between data generated from the two modalities. We test our method on newly simulated datasets containing the two image domains, and achieve an improvement of approximately 10 to 20 percentage points in average accuracy over the best-performing baselines.
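The preview does not specify the paper's two-level feature alignment in detail. As an illustration of one widely used alignment objective in deep domain adaptation, the CORAL loss of Sun and Saenko (2016) matches the second-order statistics (feature covariances) of a source domain (here, in-air images) and a target domain (underwater images). The sketch below is a minimal plain-Python illustration under that assumption, not the authors' implementation; `covariance` and `coral_loss` are hypothetical helper names.

```python
def covariance(feats):
    """Sample covariance matrix of a list of d-dimensional feature vectors."""
    n, d = len(feats), len(feats[0])
    mean = [sum(x[j] for x in feats) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for x in feats:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (x[i] - mean[i]) * (x[j] - mean[j])
    # Unbiased estimate: divide by n - 1 (assumes n >= 2 samples)
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]

def coral_loss(source, target):
    """CORAL-style alignment loss: squared Frobenius distance between the
    source and target feature covariances, scaled by 1 / (4 d^2)."""
    d = len(source[0])
    cs, ct = covariance(source), covariance(target)
    return sum((cs[i][j] - ct[i][j]) ** 2
               for i in range(d) for j in range(d)) / (4 * d * d)

# Toy usage: identical feature sets give zero loss; mismatched
# distributions give a positive loss that a network could minimize.
in_air = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
underwater = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(coral_loss(in_air, in_air))      # 0.0 by construction
print(coral_loss(in_air, underwater))  # positive: domains are misaligned
```

In a deep network this term would be added to the classification loss and minimized jointly, pulling the feature distributions of the two domains together.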

The work is supported in part by the National Natural Science Foundation of China under Grants 81671766, 61571382, 61571005, 81301278, 61172179 and 61103121, in part by the Natural Science Foundation of Guangdong Province under Grant 2015A030313007, in part by the Fundamental Research Funds for the Central Universities under Grants 20720180059 and 20720160075, and in part by the Natural Science Foundation of Fujian Province of China under Grant 2017J01126.



Author information

Corresponding author

Correspondence to Yue Huang.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, C., Xie, W., Huang, Y., Yu, X., Ding, X. (2018). Weakly-Supervised Man-Made Object Recognition in Underwater Optimal Image Through Deep Domain Adaptation. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol. 11305. Springer, Cham. https://doi.org/10.1007/978-3-030-04221-9_28


  • DOI: https://doi.org/10.1007/978-3-030-04221-9_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04220-2

  • Online ISBN: 978-3-030-04221-9

  • eBook Packages: Computer Science, Computer Science (R0)
