Abstract
Autonomous underwater vehicles (AUVs) now play an important role in applications such as the inspection of underwater structures (e.g., dams and bridges). It is desirable to develop AUVs that can operate at sea for long periods, for purposes such as retrieving methane hydrate or rare metals. Realizing such AUVs requires an automatic underwater recharging capability, which in turn requires the AUV to dock with a recharging station autonomously. We have therefore developed a stereo-vision-based docking methodology for underwater battery recharging that enables an AUV to continue its operations without returning to a surface vehicle to recharge. Because underwater recharging units are expected to be installed in the deep sea, deep-sea docking must cope with turbidity and low-light conditions. In this study, the proposed system is extended with a newly designed active (i.e., self-lighting) 3D marker to improve the visibility of the marker from the underwater vehicle, especially in turbid water. Experiments verifying the robustness of the proposed docking approach were conducted in a pool under lighting conditions that change from day to night. Furthermore, a sea docking experiment was also carried out to verify the practicality of the active marker. The experimental results confirm the effectiveness of the proposed docking system against turbidity and illumination variation.
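To make the stereo-vision (dual-eye) docking idea concrete, the following minimal Python/NumPy sketch shows how the lights of an active 3D marker might be triangulated from a rectified stereo pair and reduced to a coarse relative position of the marker. The camera parameters, pixel coordinates, and the triangulate helper are hypothetical placeholders chosen for illustration; this is not the authors' actual real-time pose-tracking implementation.

# Illustrative sketch only: stereo triangulation of active-marker lights.
# All parameters and pixel values below are assumed placeholders, not data
# taken from the paper.
import numpy as np

# Assumed rectified stereo pair: focal length f [px], baseline b [m],
# principal point (cx, cy) [px].
f, b, cx, cy = 800.0, 0.30, 320.0, 240.0

def triangulate(uv_left, uv_right):
    """Triangulate one marker light from its pixel coordinates (u, v)
    in the left and right images of a rectified stereo pair."""
    (ul, vl), (ur, _) = uv_left, uv_right
    disparity = ul - ur              # horizontal disparity [px]
    Z = f * b / disparity            # depth [m]
    X = (ul - cx) * Z / f            # lateral offset [m]
    Y = (vl - cy) * Z / f            # vertical offset [m]
    return np.array([X, Y, Z])

# Example: three detected lights of the 3D marker (placeholder pixel values).
left_px  = [(350.0, 230.0), (310.0, 232.0), (330.0, 260.0)]
right_px = [(310.0, 230.0), (270.0, 232.0), (290.0, 260.0)]
points = np.array([triangulate(l, r) for l, r in zip(left_px, right_px)])

# Coarse relative position of the marker: centroid of the triangulated lights.
print("estimated marker position [m]:", points.mean(axis=0))

A complete docking system would also estimate the marker's orientation and close the control loop on the full relative pose; the sketch above stops at a position estimate.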
Acknowledgements
The authors would like to thank Monbukagakusho; Mitsui Engineering and Shipbuilding Co., Ltd.; and Kowa Corporation for their collaboration and support for this study.
Author information
Authors and Affiliations
Corresponding author
Additional information
This work was presented in part at the 23rd International Symposium on Artificial Life and Robotics, Beppu, Oita, January 18–20, 2018.
About this article
Cite this article
Lwin, K.N., Mukada, N., Myint, M. et al. Docking at pool and sea by using active marker in turbid and day/night environment. Artif Life Robotics 23, 409–419 (2018). https://doi.org/10.1007/s10015-018-0442-1