
A novel robotic visual perception framework for underwater operation


Published in Frontiers of Information Technology & Electronic Engineering

Abstract

Underwater robotic operation usually requires visual perception (e.g., object detection and tracking), but underwater scenes have poor visual quality and represent a special domain, both of which can degrade perception accuracy. In addition, detection continuity and stability are important for robotic perception, yet the commonly used static, accuracy-based evaluation (i.e., average precision) is insufficient to reflect detector performance across time. In response to these two problems, we present a novel robotic visual perception framework. First, we investigate the relationship between a quality-diverse data domain and visual restoration in terms of detection performance. We find that although domain quality has a negligible effect on within-domain detection accuracy, visual restoration benefits detection in real sea scenarios by reducing the domain shift. Moreover, non-reference assessments of detection continuity and stability are proposed based on object tracklets. Further, online tracklet refinement is developed to improve the temporal performance of detectors. Finally, combined with visual restoration, an accurate and stable underwater robotic visual perception framework is established. Small-overlap suppression is proposed to extend video object detection (VID) methods to the single-object tracking task, enabling flexible switching between detection and tracking. Extensive experiments were conducted on the ImageNet VID dataset and in real-world robotic tasks to verify the correctness of our analysis and the superiority of the proposed approaches. The code is available at https://github.com/yrqs/VisPerception.
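The small-overlap suppression (SOS) idea summarized above, filtering a detector's per-frame outputs by their overlap with the previous target box so that a video object detector behaves as a single-object tracker, can be illustrated with a minimal sketch. The function names, box/score dictionary format, and the IoU threshold below are illustrative assumptions, not the paper's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def small_overlap_suppression(detections, prev_target_box, iou_thresh=0.3):
    """Suppress detections whose overlap with the previous target box is
    small; among the survivors, follow the highest-scoring candidate.
    Returns None when the target is lost (no candidate overlaps enough)."""
    kept = [d for d in detections
            if iou(d["box"], prev_target_box) >= iou_thresh]
    return max(kept, key=lambda d: d["score"]) if kept else None
```

In this sketch, a returned `None` signals that the tracker should fall back to full-frame detection, which is one plausible way to realize the detection/tracking switching mentioned in the abstract.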




Author information


Contributions

Yue LU designed the research. Yue LU, Xingyu CHEN, and Li WEN proposed the methods. Yue LU conducted the experiments. Junzhi YU processed the data. Zhengxing WU participated in the visualization. Yue LU drafted the paper. Zhengxing WU and Li WEN helped organize the paper. Xingyu CHEN and Junzhi YU revised and finalized the paper.

Corresponding author

Correspondence to Junzhi Yu  (喻俊志).

Additional information

Compliance with ethics guidelines

Yue LU, Xingyu CHEN, Zhengxing WU, Junzhi YU, and Li WEN declare that they have no conflict of interest.

List of electronic supplementary materials

Video S1 Underwater autonomous object search and grasping in real sea areas

Project supported by the National Natural Science Foundation of China (Nos. 61633004, 61725305, and 62073196) and the S&T Program of Hebei Province, China (No. F2020203037)

Electronic Supplementary Material

Supplementary material, approximately 18.7 MB.


Cite this article

Lu, Y., Chen, X., Wu, Z. et al. A novel robotic visual perception framework for underwater operation. Front Inform Technol Electron Eng 23, 1602–1619 (2022). https://doi.org/10.1631/FITEE.2100366

