Teleoperation for space manipulator based on complex virtual fixtures

https://doi.org/10.1016/j.robot.2019.103268

Highlights

  • A complex virtual fixture (VF) is established for space manipulation tasks.

  • The VF assists the operator throughout the whole task, from the remote approach to the vicinity of the target.

  • The VF consists of a tube-type VF and a velocity-based VF with predictive ability.

  • The results show that the VF reduces operation time and increases efficiency and accuracy.

Abstract

This paper presents a complex virtual fixture (VF) that helps space robots perform on-orbit operations in complex environments while ensuring operational safety. The main purpose of the VF is to provide virtual force feedback that adjusts the operator's actions throughout the remote operation process. The complex VF comprises a tube-type VF and a velocity-based VF. The tube-type VF ensures that the end effector approaches the target safely and at high speed over long distances, and the velocity-based VF enables the robot's end effector to observe the target from a safe, short distance near the target. Combined with dynamic prediction and path planning, the complex VF improves the flexibility and efficiency of the operation and avoids collisions in dynamic environments. The proposed methods are verified on several typical space manipulation tasks in a virtual experiment environment built in CHAI3D. The comparative results indicate that the complex VF can reduce operation time and improve efficiency and accuracy.

Introduction

With the increase of space assembly and orbit adjustment tasks, the efficiency and security of space operations have become increasingly important [1]. Space robot teleoperation, which allows astronauts to avoid the physiological restrictions of extravehicular activities, is an effective technique for achieving both [2], [3], [4], [5], [6], [7]. A major challenge of teleoperation is the time delay between space and ground, which arises because the operator interacts with the operated object through communication networks. The delayed vision, force, and other feedback cannot provide real-time telepresence, which misleads the operator and compromises operational safety. Virtual fixtures (VFs) can enhance visual, force, auditory, and other telepresence information, provide correct operational guidance and restrictions, reduce the workload, and improve the accuracy of teleoperation [8], [9].

The concept of VFs was first proposed by Rosenberg, who applied force-based and vision-based virtual fixtures to the teleoperated assembly of axle holes and demonstrated an efficiency increase of 70% [10]. Prada divided VFs into two types: guidance VFs (GVFs) and forbidden-region VFs (FRVFs). The force field generated by GVFs leads the object to the desired position, while FRVFs prevent the object from colliding with the surrounding environment [11]. Kuang designed VFs composed of several basic shapes (such as spheres, cylinders, and cones) [12] and indicated that different VFs can be combined to suit the corresponding task and ultimately achieve optimal VFs. Hidden Markov Models (HMMs) have been used for automatic segmentation and recognition of user movement. A new algorithm for real-time HMM recognition was developed by Li and Okamura [13], and the segmentation results were used to provide appropriate assistance in a path-following and obstacle-avoidance task; the HMM-based method improved operation performance compared with methods both with and without VFs. Marayong explored the effect of VF admittance on task execution performance using a human–machine cooperative system: with the velocity gain in the admittance controller, the robot speed is linearly related to the force applied by the operator, and the level of VF guidance is determined by the admittance ratio [14]. FRVFs have been proposed for a general class of teleoperation control architectures to deal with unstable slave vibrations [15]; the discrete state-space method is a simple way to design and analyze the stable and transient behaviors of teleoperation systems. A method for implementing "steady-hand teleoperation control" on the master device through impedance-type VFs was proposed by Abbott et al. [16]; combined with guidance VFs, the slave device is precisely constrained to the desired path, which suits impedance-type teleoperation systems, particularly those used in robot-assisted minimally invasive surgery. The technique of VFs has been widely used in surgery [17], [18], [19], co-manipulation [20], [21], and even space surgery [22], [23].
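To make the admittance-ratio idea mentioned above concrete, the following minimal sketch maps an operator force to a commanded velocity with different gains along and across a preferred direction. The function name, gain values, and the Python/NumPy setting are illustrative assumptions, not the controller of [14].

```python
import numpy as np

def admittance_vf_velocity(f_op, d_pref, k_admit=0.02, admit_ratio=0.2):
    """Map the operator's applied force to a commanded velocity.

    The force component along the preferred (task) direction d_pref keeps
    the full admittance gain k_admit; the off-path component is attenuated
    by admit_ratio, so a smaller ratio means stiffer guidance.
    """
    d = d_pref / np.linalg.norm(d_pref)       # unit preferred direction
    f_along = np.dot(f_op, d) * d             # force along the desired path
    f_ortho = f_op - f_along                  # force pushing off the path
    return k_admit * (f_along + admit_ratio * f_ortho)

# Example: a 5 N pull mostly along x, slightly off-axis in y
v_cmd = admittance_vf_velocity(np.array([5.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(v_cmd)   # off-axis motion is scaled down by the admittance ratio
```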

Existing studies have designed many VFs for specific teleoperation purposes that enhance human operation performance [24], [25], such as helping people with disabilities perform vocational tasks and supporting preoperative surgical planning, but few studies (e.g., that of Tu and Yu [26]) have addressed a composite VF that covers the entire operation process, from the remote end to the vicinity of the target. In the proposed approach, based on visual, force, and other sensory feedback, the end effector of the manipulator is guided toward the target from a long distance: the virtual tube applies while the robot moves to the target at high speed over a long distance, and the velocity-based VF restricts the motion via force feedback to avoid collision with the target at close proximity.
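As an illustration of the tube-type guidance idea, the sketch below computes a restoring force only when the end effector leaves a tube of fixed radius around a discretised reference path. The names, stiffness value, and nearest-point search are assumptions made for the example, not the formulation derived later in the paper.

```python
import numpy as np

def tube_vf_force(p_ee, path_pts, tube_radius=0.05, k_wall=300.0):
    """Restoring force of a tube-type guidance VF around a reference path.

    Inside the tube (distance to the centre line below tube_radius) the
    operator feels no force; outside, a spring-like force pushes the end
    effector back toward the nearest point of the discretised path.
    """
    dists = np.linalg.norm(path_pts - p_ee, axis=1)   # nearest path sample
    p_near = path_pts[np.argmin(dists)]
    r_vec = p_ee - p_near
    r = np.linalg.norm(r_vec)
    if r <= tube_radius:
        return np.zeros(3)                            # free motion inside the tube
    return -k_wall * (r - tube_radius) * r_vec / r    # push back toward the wall

# Straight reference path along x; the end effector has drifted 9 cm off in y
path = np.stack([np.linspace(0.0, 1.0, 100), np.zeros(100), np.zeros(100)], axis=1)
print(tube_vf_force(np.array([0.3, 0.09, 0.0]), path))
```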

The remainder of the article is organized as follows: Section 2 describes the general design specification and structure of the new complex VF, which is divided into several parts according to the distance to the target. Section 3 describes the two components of the VF, the virtual tube and the velocity-based VF, separately. Section 4 presents two simulations that exhibit the advantages of the proposed VF. Finally, Section 5 summarizes the paper.

Section snippets

General structure of the compound VF

In space teleoperation, the space robot is manipulated by the ground operator. Due to time delay and insufficient communication information, it is difficult for operators to send accurate commands throughout the remote operation. For short-distance operations, the operator should make the robot approach the target at a relatively low speed; any speed error can damage the target or even push it away from the workspace. A simple VF cannot meet the target requirements remotely to the …
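The hand-over implied by this compound structure can be sketched as a simple distance test: far from the target the tube-type VF is active, and inside a switching radius the velocity-based VF takes over. The two helper functions below are stubs standing in for the sub-fixtures sketched elsewhere in this text; the switching radius and function names are illustrative, not the paper's parameters.

```python
import numpy as np

# Stubs standing in for the two sub-fixtures (sketched before and after this
# section); only the distance-based hand-over is shown here.
def tube_vf_force(p_ee, path_pts):
    return np.zeros(3)          # long-range guidance placeholder

def speed_limit_vf_force(p_ee, v_ee, p_target):
    return np.zeros(3)          # near-target speed limit placeholder

def compound_vf_force(p_ee, v_ee, p_target, path_pts, switch_dist=0.3):
    """Hand over from the fast, long-range tube VF to the cautious
    velocity-based VF once the end effector is close to the target."""
    if np.linalg.norm(p_target - p_ee) > switch_dist:
        return tube_vf_force(p_ee, path_pts)            # far away: follow the tube
    return speed_limit_vf_force(p_ee, v_ee, p_target)   # close: limit the speed
```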

Proposed methods

In this section, the general methods of the virtual tube (guidance-type VF) and spherical speed-limit VF are presented separately.
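A minimal sketch of a spherical speed-limit VF is given below: inside a sphere around the target, any end-effector speed above a threshold is opposed by a damping force. The sphere radius, speed limit, and damping gain are placeholder values, not the parameters used in the paper.

```python
import numpy as np

def speed_limit_vf_force(p_ee, v_ee, p_target, r_sphere=0.3,
                         v_max=0.02, b_damp=80.0):
    """Velocity-based VF acting inside a sphere around the target.

    Outside the sphere no force is applied; inside, any end-effector speed
    above v_max is opposed by a damping force, so the operator can only
    approach the target slowly.
    """
    if np.linalg.norm(p_target - p_ee) > r_sphere:
        return np.zeros(3)                              # outside the cautious zone
    speed = np.linalg.norm(v_ee)
    if speed <= v_max:
        return np.zeros(3)                              # within the speed limit
    return -b_damp * (speed - v_max) * v_ee / speed     # brake the excess speed

# Moving at 5 cm/s toward a target 10 cm away -> the fixture pushes back
print(speed_limit_vf_force(np.zeros(3),
                           np.array([0.05, 0.0, 0.0]),
                           np.array([0.1, 0.0, 0.0])))
```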

Experimental setup

The experiment platform comprises a PHANToM [36] device and a computer controller (2.8 GHz CPU, 2 GB RAM), and the experiment scene is built with OSG (OpenSceneGraph) on the slave side, as presented in Fig. 9. The operator sends commands to the 6-DOF manipulator through the PHANToM haptic device from SensAble Corp. The slave robot is built as a visual module. The end of the virtual space robot interacts with the environment in the simulation scene, and the haptic forces are calculated by …
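The interaction described here can be summarised as a fixed-rate haptic loop: read the master pose, evaluate the virtual-fixture force for the mapped end effector, and feed that force back to the device. The device class below is a placeholder and does not reproduce the PHANToM/CHAI3D API; the loop rate and wiring are assumptions made for the sketch.

```python
import time
import numpy as np

class FakeHapticDevice:
    """Placeholder master device; the real experiment uses a PHANToM
    through CHAI3D, whose API is not reproduced here."""
    def read_position(self):
        return np.zeros(3)
    def read_velocity(self):
        return np.zeros(3)
    def apply_force(self, f):
        pass

def haptic_loop(device, vf_force, p_target, path_pts, rate_hz=1000, steps=5):
    """Fixed-rate loop: read the master pose, evaluate the virtual-fixture
    force for the mapped slave end effector, and feed it back to the hand."""
    dt = 1.0 / rate_hz
    for _ in range(steps):
        p = device.read_position()                 # master position (mapped to slave)
        v = device.read_velocity()
        f = vf_force(p, v, p_target, path_pts)     # virtual-fixture feedback force
        device.apply_force(f)
        time.sleep(dt)

# Wiring it to the compound fixture sketched earlier (signature assumed):
# haptic_loop(FakeHapticDevice(), compound_vf_force, target, path)
```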

Conclusions

In this paper, a new type of complex VF for space manipulation, consisting of a virtual tube and a velocity-based VF, is proposed. The virtual tube is suited to high-speed operation over long distances, and the velocity-based VF is used close to the target. The proposed complex VF differs from traditional methods used in static environments; the virtual tube and its center trajectory are designed for dynamic environments and are characterized by target motion prediction …
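As a toy illustration of the target motion prediction mentioned above, the sketch below extrapolates the target with a constant-velocity model and translates the tube centre trajectory accordingly; the actual prediction and re-planning scheme of the paper is not reproduced here, and all names and values are assumptions.

```python
import numpy as np

def predict_target(p_hist, t_hist, t_ahead):
    """Constant-velocity extrapolation of the target position t_ahead
    seconds into the future, from the last two tracked samples."""
    v_est = (p_hist[-1] - p_hist[-2]) / (t_hist[-1] - t_hist[-2])
    return p_hist[-1] + v_est * t_ahead

def shift_tube_path(path_pts, p_target_old, p_target_new):
    """Translate the tube centre trajectory so it still ends on the target."""
    return path_pts + (p_target_new - p_target_old)

p_hist = np.array([[0.00, 0.0, 0.0], [0.01, 0.0, 0.0]])   # target drifting along x
t_hist = np.array([0.0, 1.0])
print(predict_target(p_hist, t_hist, t_ahead=2.0))         # -> [0.03, 0.0, 0.0]
```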

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grants No. 61725303 and No. 61773317), and the Fundamental Research Funds for the Central Universities, China (Grant No. 3102016BJJ03).

References (36)

  • Bowyer, S.A., et al., Active Constraints/Virtual Fixtures: A Survey, 2014.
  • Fehlberg, M.A., et al., Improved active handrest performance through use of virtual fixtures, IEEE Trans. Hum. Mach. Syst., 2014.
  • Rosenberg, L., Virtual fixtures: Perceptual tools for telerobotic manipulation, in: Virtual Reality Annual Int. Symp., Seattle, USA, ...
  • Prada, R., et al., On study of design and implementation of VFs, Virtual Real., 2009.
  • Kuang, A.B., Payandeh, S., Zheng, B., Assembling VFs for guidance in training environment, in: 12th Int. Symp. Haptic ...
  • Li, M., Okamura, A.M., Recognition of operator motions for real-time assistance using VFs, in: 11th Int. Symp. Haptic ...
  • Marayong, P., et al., Speed-accuracy characteristics of human–machine cooperative manipulation using VFs with variable admittance, Human Factors: J. Hum. Factors Ergon. Soc., 2004.
  • Abbott, J.J., Okamura, A.M., Analysis of VFs contact stability for telemanipulation, in: IEEE/RSJ Int. Conf. Intel. ...

Zhengxiong Liu received the B.S. degree in 2005 and the Ph.D. degree in 2012, both in navigation, guidance and control, from Northwestern Polytechnical University (NPU), Xi’an, Shaanxi, PRC. He is currently an associate professor at NPU. His research interests include space teleoperation.

Zhenyu Lu (S’14) received the B.S. degree from China University of Mining and Technology in 2010 and the M.S. degree from Shenyang Aerospace University in 2013. He is currently a Ph.D. student in the Research Center for Intelligent Robotics at Northwestern Polytechnical University. His research interests include teleoperation, human–computer interaction, mechatronics, and parameter identification.

Yang Yang received the B.S. degree from Xi’an University of Technology in 2014 and the M.S. degree from Stevens Institute of Technology in 2017. He is currently a Ph.D. candidate in the Research Center for Intelligent Robotics at Northwestern Polytechnical University. His research focuses on teleoperation, human–machine interaction, and intelligent control.

Panfeng Huang (M’06-SM’17) received the B.S. and M.S. degrees from Northwestern Polytechnical University in 1998 and 2001, respectively, and the Ph.D. degree in Automation and Robotics from the Chinese University of Hong Kong in 2005. He is currently a Professor in the School of Astronautics and Vice Director of the Research Center for Intelligent Robotics at Northwestern Polytechnical University. His research interests include tethered space robotics, intelligent control, machine vision, and space teleoperation.
