Robot contact tasks in the presence of control target distortions

https://doi.org/10.1016/j.robot.2009.12.004

Abstract

This work addresses the problem of controlling robot motion and force in frictional contacts under environmental errors, particularly orientation errors that distort the desired control targets and control subspaces. The proposed method uses online estimates of the surface normal (tangent) direction to dynamically modify the control target and the control space decomposition. It is proved that these estimates converge to the actual value even though the elasticity and friction parameters are unknown. The proposed control solution is demonstrated through simulation examples of three-dimensional robot motion tasks in contact with both planar and curved surfaces.

Introduction

Manipulator force/motion control is at the heart of all robotic applications involving robot/environment interactions. In modern robot control tasks the robot acts in an unknown or partially known environment in which both the contacted objects and the environmental conditions may not be preset. Application examples that involve unknown environments under conditions that hinder their identification include the exploration of unknown and dark spaces (tanks, pipes, planets, etc.) or home environments where both the conditions and the objects change dynamically. Such environments give rise to kinematic uncertainties that affect the stability and performance of classical interaction control schemes like impedance control, hybrid force/position control and parallel force/position control, even when they are combined with the synergetic use of vision sensors. An account of the related works is given in Section 1.2 in light of the effect of environmental errors on control targets discussed in Section 1.1.

In most robot contact tasks the acting part is the robot tip that comes in contact with the environment, and hence it is customary to refer to the robot tip coordinates $x \in \mathbb{R}^6$ (for some local parameterization of orientation) instead of the joint coordinates $q \in \mathbb{R}^{n_q}$. In many cases the $x$ coordinates are absolute, in the sense that they describe the position of the robot end effector with respect to an inertial frame (e.g. the robot base). Then the relation between the $q$ and $x$ coordinates, $x = x(q)$, represents the nonlinear robot kinematics. However, the operation of a robot, particularly in contact tasks, is usually understood through some prescribed relation to its environment. This relation is mainly of a kinematic nature and it is therefore suitable to define the tool coordinates in a relative sense, i.e. with respect to the environment. Let $\xi_d \in \mathbb{R}^m$ be the desired motion trajectory defined in the environment space, where $m \le 6$; for example, in the task of scribing an ellipse on a planar surface, the desired motion trajectory may be defined in the two-dimensional surface space ($m = 2$) by $\xi_d \in \mathbb{R}^2$. Let $\pi(\cdot)$ be the kinematic mapping of a motion trajectory from the environment to the robot workspace. Its image $\pi(\xi_d)$ is the robot tool target trajectory in absolute coordinates. The kinematic mapping is generally a lifting, since its image always has the dimension of the robot workspace (6) while its rank is $m \le 6$. The motion trajectory $\xi_d \in \mathbb{R}^m$ is here generalized, i.e. it may contain both positions and orientations, since the desired orientation trajectory may be related to the contacted environment; in the scribing task, for example, quality reasons may require the desired tool orientation to be specified with respect to the surface normal (by aligning the tool with the surface normal, by constraining it within a cone around it, etc.). The mapping $\pi(\cdot)$ generally involves the environment absolute coordinates, denoted by $x_s$. For a planar contacted surface, let $x_s$ consist of the position $p_s \in \mathbb{R}^3$ and three orientation parameters that correspond to the rotation matrix $R_s = [n_s \;\, o_s \;\, a_s]$, with its first column corresponding, without loss of generality, to the surface normal direction. In the simple scribing example, the kinematic mapping $\pi(\cdot)$ of the desired position trajectory $\xi_d \in \mathbb{R}^2$ involves the rigid body transformation $(p_s, R_s)$. In fact, the desired position trajectory is lifted in the robot workspace to the target $p_d = R_s [0 \;\; \xi_d^{\mathrm{T}}]^{\mathrm{T}} + p_s$.
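
To make the lifting $\pi(\cdot)$ concrete, the following minimal sketch (in Python, with an arbitrary numerical surface pose; the function name and values are illustrative and not taken from the paper) computes the workspace target $p_d = R_s [0 \;\; \xi_d^{\mathrm{T}}]^{\mathrm{T}} + p_s$ for one point of a surface-defined path.

```python
import numpy as np

def lift_position_target(xi_d, R_s, p_s):
    """Lift a 2-D surface-frame path point xi_d to the 3-D workspace target
    p_d = R_s [0, xi_d]^T + p_s, where the columns of R_s are [n_s, o_s, a_s]
    and the first column is the surface normal."""
    return R_s @ np.array([0.0, xi_d[0], xi_d[1]]) + p_s

# Illustrative surface pose: normal n_s along the base z-axis, origin p_s
R_s = np.column_stack(([0.0, 0.0, 1.0],   # n_s
                       [1.0, 0.0, 0.0],   # o_s
                       [0.0, 1.0, 0.0]))  # a_s
p_s = np.array([0.4, 0.0, 0.1])
xi_d = np.array([0.05, 0.03])             # one point of the ellipse on the surface
p_d = lift_position_target(xi_d, R_s, p_s)
```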

In a contact task, some of the remaining motion coordinates are restricted in a way that depends on the existence of deformation on either or both sides of the contacting objects (tool and/or environment). Let $\chi \in \mathbb{R}^j$, $j \le 6 - m$, denote the restricted motion coordinates with respect to the environment. Then, in the case of infinitely rigid material, the contact is expressed as a geometric constraint that excludes motion in the restricted directions ($\chi = 0$). In the case of deformable materials, the coordinates $\chi \in \mathbb{R}^j$ express the $j$ deformations, i.e. the small-magnitude motions that are subject to elastodynamic forces. Reaction or elastic forces acting along the constrained or deformation directions, respectively, are usually part of the control target in a contact problem. Thus, let $f_d \in \mathbb{R}^j$ be the desired generalized force trajectory defined relative to the environment and $\psi(\cdot)$ the static mapping from the environment to the robot workspace, which is again a lifting ($j \le 6$) and involves the environment absolute coordinates; in the scribing example, the desired force trajectory may be defined in the one-dimensional space of the surface normal ($j = 1$, $f_d \in \mathbb{R}_+$) to express the pressing force required to achieve scribing, and it is lifted in the robot workspace to the force target $F_d = R_s [f_d \;\; 0_2^{\mathrm{T}}]^{\mathrm{T}} = n_s f_d$. Notice that along the motion coordinates $\xi$ there is either no resistance or some resistance due to the friction forces that appear in a direction opposite to the relative velocity at the contact point.
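
Analogously, the static lifting $\psi(\cdot)$ of the scalar normal-force target, and a friction term opposing the tangential velocity, can be sketched as follows. The Coulomb-type friction model below is only an illustration and is not the compliant-contact friction model used later in the paper; names and the coefficient value are assumptions.

```python
import numpy as np

def lift_force_target(f_d, R_s):
    """Lift the scalar desired normal force f_d to the workspace:
    F_d = R_s [f_d, 0, 0]^T = n_s * f_d, with n_s the first column of R_s."""
    return R_s[:, 0] * f_d

def friction_force(v_rel, F_n, mu=0.3, eps=1e-9):
    """Illustrative Coulomb-type friction: magnitude proportional to the normal
    force, direction opposite to the relative velocity at the contact point."""
    speed = np.linalg.norm(v_rel)
    if speed < eps:
        return np.zeros(3)
    return -mu * np.linalg.norm(F_n) * v_rel / speed
```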

We have seen that the kinematic and static mappings $\pi(\cdot)$ and $\psi(\cdot)$ depend on the absolute environment coordinates $x_s$, which may be input, sensed, calculated and/or estimated by some means in order to be used internally by the controller in producing the desired target in the robot workspace. It is therefore possible for the controller's internal $x_s$ values to differ from the actual $x_s$ values owing to measurement inaccuracies or uncertainties and/or lack of knowledge arising from surface misplacements. Consequently, the mappings $\pi(\cdot)$ and $\psi(\cdot)$ and their respective images are uncertain. Such environmental errors belong to the kinematic uncertainties, which may also include inexact mappings between the joint space and the robot tip space. We will subsequently denote uncertain quantities with hatted symbols, as they express estimates of reality and are hence in general different from their actual values. Thus, in the case of environmental uncertainty ($\hat{x}_s$), the position and force control targets $\hat{p}_d$, $\hat{F}_d$ may differ from the actually desired goals $p_d$, $F_d$. We will next discuss, through the simple example of scribing simple paths on a surface, how $\hat{p}_d$, $\hat{F}_d$ differ from $p_d$, $F_d$ under various types of environment error related to the relative translation and rotation between the actual and estimated surface.

  • Translation along the surface normal

    In this case, the distance from the surface is unknown, $n_s^{\mathrm{T}} \hat{p}_s \neq n_s^{\mathrm{T}} p_s$, and hence the target path $\hat{p}_d$ is displaced with respect to the desired $p_d$ by a bias along the surface normal. If the force control is designed so that contact maintenance is ensured, the desired path can be achieved as well as the desired force. In general, when the robot is in contact with the surface, the initial robot position can serve as an estimate of the surface position (Fig. 1(a)). This estimate is accurate in the case of infinitely rigid material on both contact sides. If, however, there is significant contact compliance on either or both the surface and the robot tip, which is furthermore uncertain or unknown, this type of position uncertainty arises (Fig. 1(b)).

  • Translation along the surface tangent and/or rotation around the surface normal $n_s$

    In the case of a translation error, the target path $\hat{p}_d$ is displaced with respect to the desired $p_d$ by a bias on the contacted surface, while the target force expresses the desired force. For example, when it is desired to scribe an ellipse at the center of a page (Fig. 2(a), ellipse on the dashed-line page) and the page is translated (Fig. 2(a), solid line), the target path is no longer centered. In the case of a rotation error, the target path $\hat{p}_d$ is rotated around the surface normal with respect to the desired $p_d$. In the previous example the ellipse is rotated as compared to the desired goal (Fig. 2(b)). Notice that under these errors the target path is misplaced on the surface following the relative rotation and/or translation, but it is not distorted.

  • Rotation around the $o_s$ and $a_s$ axes

    In this case the surface orientation is inexact, which is equivalent to inexact knowledge of the surface normal vector, i.e. $\hat{n}_s$ does not coincide with its actual value. With this type of relative rotation (around the $o_s$ and $a_s$ axes), $\hat{R}_s$ can be uniquely defined by $\hat{n}_s$. This type of error seriously affects the control action because the motion and force targets $\hat{p}_d$, $\hat{F}_d$ are no longer compatible with the physical constraint imposed by the surface; the force target $\hat{F}_d$ is defined along a direction forming an angle with the direction normal to the actual surface, while the target path $\hat{p}_d$ does not lie on the actual surface. Even under the assumption that these errors do not destabilize the control system or result in contact loss, the target path is distorted with respect to the desired path; in fact, contact maintenance implies the projection of the target path on the actual surface, which causes its distortion (Fig. 3(a); see also the numerical sketch after this list). The general rotation case, which would additionally include a rotation around the $n_s$ axis, would combine the effects of the second and third types of environmental error, i.e. the target would be both distorted and misplaced.
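
The distortion caused by an orientation error can be quantified with a simple numerical sketch: a path lifted with an erroneous $\hat{R}_s$ (here a hypothetical 5° rotation about the $a_s$ axis, which is aligned with the base y-axis in this example) leaves the actual plane, and maintaining contact amounts to projecting it back onto the plane, which shortens the ellipse along one semi-axis. All numerical values are illustrative.

```python
import numpy as np

def rot_about_y(theta):
    """Rotation matrix about the base y-axis (here the a_s direction)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# Actual surface: the base x-y plane, normal n_s = z, origin p_s = 0
n_s, p_s = np.array([0.0, 0.0, 1.0]), np.zeros(3)
R_s = np.column_stack((n_s, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))    # [n_s o_s a_s]

# Hypothetical 5-degree orientation error about a_s
R_hat = rot_about_y(np.deg2rad(5.0)) @ R_s

# Desired ellipse on the surface and its lift with the erroneous estimate
t = np.linspace(0.0, 2.0 * np.pi, 200)
xi_d = np.vstack((0.05 * np.cos(t), 0.03 * np.sin(t)))            # 2 x 200
p_hat_d = (R_hat @ np.vstack((np.zeros_like(t), xi_d))).T + p_s   # 200 x 3

# Contact maintenance projects the target onto the actual plane -> distortion
p_proj = p_hat_d - np.outer((p_hat_d - p_s) @ n_s, n_s)
print(p_proj[:, 0].max())   # ~0.05*cos(5 deg): the o_s semi-axis is shortened
```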

Last, it is possible for the control targets not to correspond to the desired task objectives even though the environment position is exactly known, $\hat{x}_s = x_s$. This problem arises when the actual surface geometry is different from the geometry implied by the kinematic and static mappings used to lift the desired targets; in the examples so far the mappings $\pi(\cdot)$ and $\psi(\cdot)$ assume a planar surface that in reality may be curved. Then, when contact is maintained, target path distortions arise due to the projection of the path on the actual curved surface (Fig. 3(b)). The force target is also affected, since it is defined along a direction forming an angle with the actual normal direction at each point of the path. This case is, however, equivalent to a general rotation as regards the way it affects the control targets; the only difference is that the projection directions now vary with the path position, since the normal (tangent) direction is not constant on a curved surface.
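
For a curved surface the same projection argument applies pointwise; the only addition is that the normal direction must be evaluated at the current contact position. The sketch below uses a spherical surface purely as a hypothetical geometry whose normal has a closed form; the function names are illustrative.

```python
import numpy as np

def sphere_normal(p, centre):
    """Outward unit normal of a spherical surface at a point p lying on it;
    unlike the planar case, the normal varies with the contact position."""
    n = p - centre
    return n / np.linalg.norm(n)

def project_to_tangent_plane(p_target, p_contact, n):
    """Project a (possibly distorted) target point onto the local tangent
    plane at the current contact point."""
    return p_target - np.dot(n, p_target - p_contact) * n
```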

It is now clear that the most crucial effect of the environmental errors is the distortion of the targets caused by inaccurate surface orientation. This work focuses on surface orientation uncertainties and the associated control problem; i.e. our concern is to find a controller that achieves the desired contact task objectives despite the fact that the control targets are erroneously defined. Usually the force and position target trajectories are assumed to be accurately defined a priori and are not modified during task execution. However, since the target definition can be based either on an initial surface position estimate or, when vision is used, on a mapping from the image space to the robot workspace, this hypothesis does not hold true in the case of inaccuracies. Then the control target is only an estimate of the desired goal, and therefore at best the control system will achieve a distorted target; at worst, the system will attempt either to exert force along a motion-free coordinate or to move along a restricted direction, which may lead to instability or contact loss. As control schemes are mainly based on the decomposition of the control space into force and motion control subspaces, erroneous estimates of the control directions lead to a false decomposition; extra control effort is then required to compensate for the distortion and to stabilize the closed-loop system online.

It is important to classify kinematic uncertainties, based on the way they affect the control system, into those that distort the desired targets and control subspaces (e.g. surface orientation and surface geometry uncertainties) and those that do not (e.g. robot kinematic uncertainties). Environmental kinematic uncertainties have been shown to affect the steady state [1] or even the stability of robot motions [2], [3] for setpoint targets. Nevertheless, and despite uncertainties in the constraint geometry, some control solutions have been proposed in the literature for both regulation and trajectory following problems. These works can be classified into categories depending on the way they define the control target (known–distorted–modified) and/or decompose the control space (constant estimate $\hat{n}_s$ – measured online $n_s$ – identified online $\hat{n}_s(t) \to n_s$). They are further characterized by whether or not they treat frictional contact, consider contact compliance and assume planar or curved surfaces. To our knowledge, Table 1 summarizes the publications in this domain.

It is clear that the majority of these works consider a rigid contact type and measure the normal surface direction online using force measurements in order to decompose the control space [4], [5], [6], [7], [8], [9], [10], [11], [12]. In the case of frictionless contact, force measurements lie along the surface normal [4], [5], [6], [12], [20]; however, as force measurements are noisy signals and friction usually arises in practice, the normal direction thus calculated may be erroneous. The friction effect in the calculation of the normal direction has been noticed in [8], where an algorithm to estimate the constraint direction is proposed that filters out force measurements along discrete position step directions; the same algorithm is also used in [9], [10]. In a remark in [7], filtering the friction forces out of the force measurements is also proposed with the use of a projection matrix constructed from the current tip velocities when they are away from zero; this velocity-based filter takes advantage of the fact that in constrained robot motion the tip velocities lie on the surface tangent. In the case of a compliant contact, however, velocity-based force filtering may produce erroneous results, since deformations allow small-magnitude motions along the constrained direction. For compliant contacts the authors have in the past proposed either constant estimates, which however result in a control space decomposition that is incompatible with the actual constraint [14], [15], or more recently controllers that identify the actual surface normal direction online, both for setpoint [16], [17] and for frictionless trajectory tracking problems [18]. Notice that in setpoint problems an accurate control space decomposition is only required at the target point [11], [17]. On the other hand, trajectory tracking controllers require not only a control space decomposition over the whole position path but also constraint Jacobian derivatives [4], [6], [8]. The latter are calculated using force derivatives; however, force derivatives imply numerically differentiating the noisy force measurements. This problem was initially recognized in [7] and resolved using online estimates in the respective control terms; alternatively, the constraint Jacobian derivative has been incorporated in the robot dynamic regressor in [5].
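
A minimal sketch of the velocity-based filtering idea attributed above to [7] is given below, assuming a rigid frictional contact so that the measured tip velocity lies on the surface tangent; the function name and threshold are illustrative and do not reproduce the cited algorithm verbatim.

```python
import numpy as np

def normal_from_filtered_force(F_meas, v_tip, v_min=1e-3):
    """Estimate the surface normal from a noisy force measurement by removing
    the component along the current tip velocity (assumed tangential for a
    rigid contact), which filters out the friction contribution."""
    speed = np.linalg.norm(v_tip)
    if speed < v_min:                        # usable only away from zero velocity
        return None
    t_hat = v_tip / speed                    # estimated tangent direction
    P = np.eye(3) - np.outer(t_hat, t_hat)   # projector removing the tangent part
    F_n = P @ F_meas                         # remaining (approximately normal) force
    return F_n / np.linalg.norm(F_n)
```

As the text notes, with a compliant contact this estimate degrades, since small deformation velocities along the constrained direction violate the tangential-velocity assumption.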

Some works in Table 1 implicitly assume that the position target is known, in the sense that it coincides with the desired goal [4], [5], [8], [9], [10], [11], [13]. In fact, vision or optical sensing is heavily used in motion/force control problems in order to define the desired position target in the image space and to identify the current tip position on it [5], [9], [10], [13]. Most of the rest use target projections, and they thus achieve a distorted position/velocity target rather than the desired one [7], [14], [15], [16], [17], [18]. Last, [6], [20] consider a target that is not defined a priori but is determined online using force measurements, under the assumption of frictionless contact.

Some works in Table 1 consider additional sources of uncertainty; robot kinematic and dynamic parameter uncertainties are considered in [4], [5], and stiffness uncertainties in [18], [19]. These sources of uncertainty have nevertheless been considered separately in numerous publications; for example, stiffness parametric uncertainties are treated in [21], [22], [23], model stiffness uncertainties in [24], dynamic parameter uncertainties in [25] and dynamic model uncertainties in [24], [26].

The above account of the research related to environmental kinematic uncertainties shows that the problem of target distortion has largely been ignored, while accurate control space decomposition for trajectory following has been mainly limited to frictionless, rigid contacts that allow a force-measurement-based calculation of the surface normal direction. On the other hand, vision-based approaches that define the desired target in the image space introduce additional uncertainties through the camera model and, most importantly, assume adequate lighting conditions. In contrast, this work proposes a control method that can work “blindly”, using online estimates of the surface normal (tangent) direction to dynamically modify not only the control space decomposition but also the target, in order to achieve the desired contact task objectives while identifying the surface slope. Furthermore, the proposed control solution is applied to the more practical case of a frictional compliant contact with unknown elasticity and friction parameters and is validated through simulations on both planar and curved surfaces.

Section snippets

System modeling

Consider a robot finger with a soft hemispherical fingertip of radius $r$ in contact with a rigid flat surface. Let $q \in \mathbb{R}^{n_q}$ be the vector of the generalized robot joint variables and $\{B\}$ the inertial frame attached at the finger base (Fig. 4). Let the surface frame $\{s\}$ be attached at some point on the surface, described by position $p_s$ and orientation $R_s = [n_s \;\, o_s \;\, a_s]$ such that $n_s \in \mathbb{R}^3$ is pointing inwards. Consider also the frame $\{t\}$ at the finger rigid tip with position $p \in \mathbb{R}^3$ and rotation matrix $R_t$ that can
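
A hypothetical geometric sketch of the soft-fingertip contact follows; it assumes a simple penetration-depth deformation and a linear normal reaction, which need not coincide with the paper's actual compliant-contact model. The sign convention and names are assumptions.

```python
import numpy as np

def fingertip_deformation(p_t, r, n, p_s):
    """Penetration of a hemispherical fingertip of radius r (rigid core at p_t)
    into a plane through p_s; n is taken here as the unit normal pointing from
    the surface towards the finger (sign conventions are illustrative)."""
    d = np.dot(n, p_t - p_s)        # distance of the core centre from the plane
    return max(0.0, r - d)          # overlap depth (zero when not in contact)

def normal_reaction(delta, k, n):
    """Hypothetical linear-compliance reaction f = k * delta along n."""
    return k * delta * n
```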

Controller design

The desired task is for the robot tip to follow a trajectory $\xi_d(t), \dot{\xi}_d(t) \in \mathbb{R}^2$ defined on the surface while exerting a force along the surface normal of magnitude $f_d(t) \in \mathbb{R}_+$. Notice that the desired position and force trajectories, as well as the corresponding derivatives, have been defined in a lower-dimensional space (two-dimensional for the position and one-dimensional for the force). Let tracking of the desired orientation trajectory $\varphi_d(t), \omega_d(t) = \dot{\varphi}_d(t) \in \mathbb{R}^3$ also be required. We consider this
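
The low-dimensional desired trajectories can be written directly as functions of time; the specific curves below are hypothetical placeholders with the dimensions stated above (2-D surface path, positive scalar force, 3-D angular velocity) and are not the trajectories used in the paper.

```python
import numpy as np

# Hypothetical desired trajectories (dimensions as in the text, values illustrative)
def xi_d(t):
    """Desired 2-D position path on the surface, e.g. an ellipse."""
    return np.array([0.05 * np.cos(0.5 * t), 0.03 * np.sin(0.5 * t)])

def dxi_d(t):
    """Time derivative of xi_d."""
    return np.array([-0.025 * np.sin(0.5 * t), 0.015 * np.cos(0.5 * t)])

def f_d(t):
    """Desired normal force magnitude, kept strictly positive."""
    return 2.0 + 0.5 * np.sin(0.2 * t)

def omega_d(t):
    """Desired 3-D angular velocity trajectory."""
    return np.array([0.0, 0.0, 0.1 * np.sin(0.2 * t)])
```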

Simulation results

In the simulation example we consider a spatial three-dof manipulator with revolute joints and a soft hemispherical fingertip (Fig. 7). The robot has link lengths $l_2 = 0.3\,\mathrm{m}$, $l_3 = 0.2\,\mathrm{m}$, masses $m_2 = 0.4\,\mathrm{kg}$, $m_3 = 0.3\,\mathrm{kg}$ and inertias $I_{z1} = 10^{-4}\,\mathrm{kg\,m^2}$, $I_{z2} = I_{x2} = I_{y2} = 6 \times 10^{-3}\,\mathrm{kg\,m^2}$, $I_{z3} = I_{x3} = I_{y3} = 2 \times 10^{-3}\,\mathrm{kg\,m^2}$. The fingertip radius is $r = 0.04\,\mathrm{m}$, while the stiffness parameter is $k = 3500\,\mathrm{N/m}$. We consider viscous friction with friction coefficient $c = 5\,\mathrm{kg/s}$. The control purpose is to exert a time-varying normal force with magnitude $f_d$
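
For reference, the numerical values reported in this section can be collected in a single parameter dictionary (values copied from the text as interpreted above; the robot and contact dynamics themselves are not reproduced here).

```python
# Simulation parameters as reported in the text (SI units)
params = {
    "l2": 0.3,  "l3": 0.2,                    # link lengths [m]
    "m2": 0.4,  "m3": 0.3,                    # link masses [kg]
    "Iz1": 1e-4,                              # link 1 inertia [kg m^2]
    "Iz2": 6e-3, "Ix2": 6e-3, "Iy2": 6e-3,    # link 2 inertias [kg m^2]
    "Iz3": 2e-3, "Ix3": 2e-3, "Iy3": 2e-3,    # link 3 inertias [kg m^2]
    "r": 0.04,                                # fingertip radius [m]
    "k": 3500.0,                              # contact stiffness [N/m]
    "c": 5.0,                                 # viscous friction coefficient [kg/s]
}
```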

Conclusions

This work proposes a controller that achieves the desired contact task objectives despite the fact that the control targets are erroneously defined because of the existence of environmental errors. The proposed methodology uses current estimates of the environment orientation, which eventually converge to the actual value, in order to modify the target and synthesize the control input so that the desired motion and force trajectories are achieved. This is accomplished in the presence of friction and

References (33)

  • C.C. Cheah et al., Stability of hybrid position and force control for robotic kinematics and dynamics uncertainties, Automatica, 2003.
  • Z. Doulgeri et al., Force position control for a robot finger with a soft tip and kinematic uncertainties, Robotics and Autonomous Systems, 2007.
  • D. Wang et al., Position and force control for constrained manipulator motion: Lyapunov’s direct approach, IEEE Transactions on Robotics and Automation, 1993.
  • D. Wang et al., Stability analysis of the equilibrium of a constrained mechanical system, International Journal of Control, 1994.
  • Z. Doulgeri et al., Performance analysis of a soft tip robotic finger controlled by a parallel force/position regulator under kinematic uncertainties, IET Proceedings in Control Theory and Applications, 2007.
  • C.C. Cheah, Y. Zhao, J. Slotine, Adaptive Jacobian motion and force tracking control for constrained robots with...
  • Y. Zhao, C.C. Cheah, J. Slotine, Adaptive vision and force tracking control for constrained robots, in: Proc. IEEE/RSJ...
  • D. Xiao et al., Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment, IEEE Transactions on Control Systems Technology, 2000.
  • M. Namvar et al., Adaptive force–motion control of coordinated robot interacting with geometrically unknown environments, IEEE Transactions on Robotics, 2005.
  • T. Yoshikawa et al., Dynamic hybrid position/force control of robot manipulators—On-line estimation of unknown constraint, IEEE Transactions on Robotics and Automation, 1993.
  • K. Hosoda et al., Adaptive hybrid control for visual and force servoing in an unknown environment, IEEE Robotics and Automation Magazine, 1998.
  • A. Leite, F. Lizarralde, L. Hsu, Hybrid vision–force robot control for tasks on unknown smooth surfaces, in: Proc. IEEE...
  • D. Wang et al., Global stabilization for constrained robot motions with constrained uncertainties, Robotica, 1998.
  • T. Olsson, J. Bengtsson, R. Johansson, H. Malm, Force control and visual servoing using planar surface identification,...
  • Z. Doulgeri et al., A position/force control for a robot finger with soft tip and uncertain kinematics, Journal of Robotic Systems, 2002.
  • Y. Karayiannidis, Z. Doulgeri, An adaptive law for slope identification and force position regulation using motion...

Y. Karayiannidis was born in 1980. He received a diploma in Electrical and Computer Engineering in 2004 from the Aristotle University of Thessaloniki, Greece and a Ph.D. from the same University in 2009, working on the control of kinematically uncertain robotic systems under a grant from the State Scholarships Foundation of Greece. His research interests are in the area of robot control.

Z. Doulgeri is currently a Professor in Robotics and Control of Manufacturing Systems in the Department of Electrical and Computer Engineering of the Aristotle University of Thessaloniki, Greece. She received a diploma in Electrical Engineering in 1980 from the Aristotle University, an M.Sc. in Control Systems in 1982, an M.Sc. in Social and Economic Studies in 1983 and a Ph.D. in Mechanical Engineering in 1987 from Imperial College, London, UK. Her current research interests are in object grasping and manipulation by robot fingers, the control of robot contact tasks under kinematic uncertainties and the use of web telerobotics and virtual robotic environments in the teaching of robotics.
