Saliency based shape extraction of objects in unconstrained underwater environment

Multimedia Tools and Applications

Abstract

Unmanned underwater exploration in unconstrained environments is a challenging and non-trivial problem. Manual analysis of the large volumes of images and videos captured by underwater stations and vehicles is a major bottleneck for the underwater research community, and automated systems for analyzing these videos are urgently needed for exploring the underwater space. In this paper, we present a method for extracting the shapes of objects present in unconstrained underwater scenes. The proposed method extracts object shapes using saliency-gradient-based morphological active contour models. Its distinguishing feature is that the stopping condition for the active contour models is derived from a combination of the saliency gradient and the gradient of the scene, which allows the method to operate in highly dynamic and unconstrained underwater environments. The results show that the proposed method extracts the shapes of both man-made and natural objects under these conditions, detects the shapes of multiple objects present in a single underwater scene, and successfully extracts the shapes of occluded objects. Quantitatively, the proposed saliency-gradient-based morphological GAC extracts a minimum of 63% and an average of 90% of the objects with a misclassification rate of 4%, whereas the saliency-gradient-based morphological ACWE extracts a minimum of 62% and an average of 85% of the objects with a misclassification rate of 4%.
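For readers who want to experiment with the idea, the following is a minimal sketch of the pipeline the abstract describes, built on scikit-image's morphological geodesic active contour (GAC). It uses a frequency-tuned saliency map (per-pixel Lab colour distance from the image mean) as a stand-in for the paper's saliency model, and the weighted blend of the saliency gradient and scene gradient in `combined_stopping_function` is an illustrative combination rule, not the authors' exact formulation.

```python
import numpy as np
from skimage import color, filters, img_as_float
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)


def frequency_tuned_saliency(rgb):
    # Distance of each (blurred) Lab pixel from the global Lab mean,
    # normalised to [0, 1]; a stand-in for the paper's saliency model.
    lab = color.rgb2lab(img_as_float(rgb))
    blurred = filters.gaussian(lab, sigma=3, channel_axis=-1)
    mean = lab.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(blurred - mean, axis=-1)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)


def combined_stopping_function(rgb, alpha=0.5):
    # Blend the saliency gradient with the scene gradient. Both terms are
    # inverse-gradient maps (small where edges are strong), so the contour
    # halts where either gradient is strong. The weighted sum is an
    # assumed combination rule.
    gray = color.rgb2gray(img_as_float(rgb))
    g_scene = inverse_gaussian_gradient(gray, alpha=100, sigma=2)
    g_saliency = inverse_gaussian_gradient(frequency_tuned_saliency(rgb),
                                           alpha=100, sigma=2)
    return alpha * g_saliency + (1 - alpha) * g_scene


def extract_shape(rgb, num_iter=200):
    # Shrink a circular initial level set onto the salient object boundary.
    g = combined_stopping_function(rgb)
    return morphological_geodesic_active_contour(
        g, num_iter, init_level_set='circle', smoothing=2, balloon=-1)
```

The morphological ACWE variant mentioned in the abstract could be explored analogously with `skimage.segmentation.morphological_chan_vese`, though ACWE evolves on region statistics rather than a gradient map, so the saliency term would enter the pipeline differently.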


Acknowledgements

Nitin Kumar is thankful to CSIR-CSIO, Chandigarh, for providing the funding and the opportunity to carry out this work under the grant UnWaR. The authors gratefully acknowledge ONC for providing the underwater videos used in this research. The authors also thank Neha for assisting in generating the ground truth.

Author information

Correspondence to H. K. Sardana.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

The ground truth contours for the frames are generated as illustrated below and are used for the quantitative analysis. Figure 7 shows a few illustrative examples of the generated contours.
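As a rough guide to how such contours can be scored, the sketch below computes the two quantities reported in the abstract, assuming that "percentage of the object extracted" means the fraction of ground truth pixels covered by the extracted shape and "misclassification rate" means the fraction of extracted pixels falling outside the ground truth; the paper's exact definitions may differ.

```python
import numpy as np


def extraction_and_misclassification(pred_mask, gt_mask):
    # pred_mask: binary mask produced by the active contour;
    # gt_mask: binary mask filled from the ground truth contour.
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    extracted = (pred & gt).sum() / max(gt.sum(), 1)         # fraction of object recovered
    misclassified = (pred & ~gt).sum() / max(pred.sum(), 1)  # fraction of false positives
    return extracted, misclassified
```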

Fig. 7 Generated ground truth contours


Cite this article

Kumar, N., Sardana, H.K. & Shome, S.N. Saliency based shape extraction of objects in unconstrained underwater environment. Multimed Tools Appl 78, 15121–15139 (2019). https://doi.org/10.1007/s11042-018-6849-9
