Brief Paper

On the equivalence of causal LTI iterative learning control and feedback control☆
Introduction
In many practical control applications, a reference trajectory is repeated several times. For example, a robotic manipulator might perform the same movements thousands of times. This provides an opportunity to increase the accuracy of the system by learning from previous trials. This approach is called iterative learning control (ILC) and was originally proposed in Arimoto, Kawamura, and Miyazaki (1984).
The literature on ILC is vast. A survey of the field citing 254 papers is presented in Moore (1999). ILC research includes linear time-invariant (LTI) systems (Moore, 1993), discrete-time systems (Phan, Longman, & Moore, 2000), and nonlinear systems (Kuc, Lee, & Nam, 1992). Analyses consider specific ILC algorithms, such as derivative (D-type) (Arimoto, Kawamura, & Miyazaki, 1984) and proportional (P-type) (Saab, 1994), as well as algorithms containing general linear operators (Moore, 1993). Many analyses include ‘current-cycle feedback’ in the ILC algorithm (Chen, Xu, & Lee, 1996) to stabilize the plant and improve performance. The effect of disturbances and initial conditions is considered in Heinzinger, Fenwick, Paden, and Miyazaki (1992). Most ILC algorithms are based on causal operators, although non-causal operators have been considered in more recent work (Phan, Longman, & Moore, 2000; Chen & Moore, 2000; Longman, 2000; Elci, Longman, Phan, Juang, & Ugoletti, 1994; Wang, 2000).
In this paper, we consider causal LTI ILC with current-cycle feedback. The LTI assumption allows the use of frequency-domain techniques. Related work on LTI ILC investigates performance and robustness (Liang & Looze, 1993) and optimization (Amann et al., 1998). A comparison between time- and frequency-domain stability results appears in Judd, Van Til, and Hideg (1993). It is shown in Amann and Owens (1994) that convergence to zero error is not possible if the plant is non-minimum phase. The design of ILC for non-minimum phase systems (with non-zero error) is considered in Roh, Lee, and Chung (1996).
This paper proves that for any causal LTI ILC, there exists a feedback control that achieves the ultimate tracking error of the ILC without iterations. Necessary conditions for ILC internal stability and convergence, derived in Goldsmith (2002), are used to prove feedback equivalence for the case of zero ultimate ILC error.
This paper is organized as follows. In Section 2, we introduce the general ILC and derive an expression for the ultimate tracking error. In Section 3, we obtain an equivalent feedback control that achieves the ultimate ILC error for the case when the ultimate ILC error is nonzero. Section 4 gives conditions for ILC internal stability and convergence. These conditions are used in Section 5 to obtain an internally stabilizing equivalent feedback for the case of zero ultimate ILC error. Section 6 concludes the paper.
Iterative learning control
In this section, we apply ILC to a linear time-invariant plant. The ILC includes a current-cycle feedback term, which may be set to zero if the plant is stable.
Let Lᵐ denote the space of piecewise continuous, square integrable functions f: [0, ∞) → ℝᵐ, with norm ‖f‖ = (∫₀^∞ f(t)ᵀf(t) dt)^{1/2}.
Since we will be considering systems that may be unstable, we also define the extended space Lᵐₑ = {f : f_T ∈ Lᵐ for all T ⩾ 0}, where f_T denotes the truncation of f, given by f_T(t) = f(t) for t ⩽ T and f_T(t) = 0 for t > T.
The system to be controlled is modelled as y_i = P u_i, where u_i ∈ Lᵐₑ is a control input, y_i is the plant output on trial i, and P is a causal LTI operator.
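To make the trial-to-trial setup concrete, the following sketch applies a simple shifted P-type ILC update, u_{i+1}(t) = u_i(t) + γ e_i(t+1), to a first-order discrete-time plant. The plant, gains, and reference below are illustrative assumptions, not the paper's general operator formulation:

```python
import math

# Illustrative first-order discrete-time plant: x[t+1] = a*x[t] + b*u[t], y[t] = x[t]
a, b, N = 0.8, 1.0, 20

def trial(u):
    """Run one trial of length N and return the output sequence."""
    y, x = [], 0.0
    for t in range(N):
        y.append(x)
        x = a * x + b * u[t]
    return y

r = [math.sin(math.pi * t / (N - 1)) for t in range(N)]  # repeated reference
u = [0.0] * N
for i in range(200):
    y = trial(u)
    e = [r[t] - y[t] for t in range(N)]
    # shifted P-type update u[t] += gamma*e[t+1]; the shift compensates
    # the plant's one-step input-output delay
    u = [u[t] + 0.5 * e[t + 1] for t in range(N - 1)] + [u[-1]]

final_error = max(abs(rt - yt) for rt, yt in zip(r, trial(u)))
print(final_error)  # trial-to-trial learning drives the tracking error toward zero
```

Here the per-trial contraction factor on the error is 1 − γb = 0.5, so the error shrinks geometrically over trials.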
Equivalent feedback control
In this section, we show that if (I−F)−1 is defined, then the fixed point error e∞ of the ILC can be achieved in a single trial using feedback control. As we are considering only one trial of feedback control, we may drop the subscript i on the signals in (7), and write the open-loop system as
Conventional feedback control has the form u = Ke, where K is a causal LTI operator.

Theorem 1. Suppose (I−F)⁻¹ is defined, and let K be chosen accordingly from the ILC operators. Then the feedback control (16) applied to (15) gives u = u∞ and e = e∞.

Proof
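A scalar numerical sketch of this equivalence, assuming for illustration an ILC of the form u_{i+1} = f·u_i + l·e_i with no current-cycle term and the corresponding choice K = l/(1−f) (the scalar analogue of (I−F)⁻¹L); the plant gain and ILC parameters are hypothetical:

```python
p, r = 2.0, 1.0   # hypothetical static plant gain and reference
f, l = 0.5, 0.2   # assumed ILC parameters: u_{i+1} = f*u_i + l*e_i

# run the ILC to (numerical) convergence over many trials
u = 0.0
for _ in range(1000):
    e = r - p * u
    u = f * u + l * e
e_ilc = r - p * u

# one-shot feedback u = K*e with K = l/(1 - f)
K = l / (1.0 - f)
e_fb = r / (1.0 + K * p)

print(abs(e_ilc - e_fb))  # the two errors coincide
```

The ILC fixed point satisfies u∞ = f·u∞ + l·e∞, i.e. u∞ = K·e∞, which is exactly the relation the single-trial feedback enforces, so both yield the same error.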
Stability and convergence of ILC
Theorem 1 does not apply if (I−F)⁻¹ is undefined. This case is of interest because F=I gives e∞=0 in (14). In Section 5, we extend our equivalence result to the case e∞=0 (for SISO systems) by proving that if the ILC system is internally stable and converges to zero error, then there is a high-gain feedback that achieves internal stability and arbitrarily small error (with no iterations). In this section, we present conditions for internal stability and convergence of ILC.
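The role of the convergence condition can be seen in a scalar sketch: for the first-order update u_{i+1} = u_i + γe_i on a static plant of gain p, the error contracts by the factor |1 − γp| each trial, so the iteration converges precisely when |1 − γp| < 1. All numbers below are hypothetical:

```python
p, r = 2.0, 1.0  # hypothetical static plant gain and reference

def ilc_errors(gamma, trials=10):
    """Trial-by-trial error magnitudes for the update u_{i+1} = u_i + gamma*e_i."""
    u, errs = 0.0, []
    for _ in range(trials):
        e = r - p * u
        errs.append(abs(e))
        u += gamma * e
    return errs

good = ilc_errors(0.4)  # |1 - 0.4*2| = 0.2 < 1: errors shrink geometrically
bad = ilc_errors(1.2)   # |1 - 1.2*2| = 1.4 > 1: errors grow without bound
print(good[-1], bad[-1])
```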
The ILC system is
Feedback equivalence in the case of zero ultimate ILC error
In Section 4, conditions were obtained for ILC to be internally stable (bounded ui and ei in the presence of bounded noise and disturbance inputs) and to converge in the absence of noise and disturbance inputs. In this section, we apply these same requirements (internal stability plus convergence) to an equivalent feedback. For SISO systems, we show that if the ILC is internally stable and converges to zero error, then there exists an internally stabilizing high-gain feedback that converges to arbitrarily small tracking error.
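The correspondence between the feedback gain k and the iteration number i can be sketched in the scalar case: with F = I, the ILC error after i trials is (1 − γp)^i r, while a feedback of gain k leaves the residual error r/(1 + kp); both tend to zero, with k playing the role of i. All numbers below are hypothetical:

```python
p, r, gamma = 1.0, 1.0, 0.5  # hypothetical plant gain, reference, and learning gain

# ILC with F = I (a pure integrator in the iteration domain): e_i = (1 - gamma*p)^i * r
ilc_err = [abs((1 - gamma * p) ** i * r) for i in range(1, 6)]

# high-gain feedback: e(k) = r/(1 + k*p) -> 0 as k -> infinity
fb_err = [abs(r / (1 + k * p)) for k in (1, 10, 100, 1000)]

print(ilc_err)  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
print(fb_err)   # the error shrinks as the gain k grows
```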
Conclusion
Conventional feedback control is preferable to causal ILC because it can achieve in one trial the same error that ILC achieves in an arbitrarily large number of trials. An equivalent feedback can always be constructed from the ILC parameters with no additional plant knowledge, whether or not the ILC itself includes feedback. If the ILC converges to zero error, the equivalent feedback is a high-gain controller whose gain k plays the role of the iteration number i in the ILC. The internal stability of this equivalent feedback is established using the ILC stability and convergence conditions of Section 4.
Peter Goldsmith received his B.Sc. and M.Eng. degrees in mechanical engineering from the University of Calgary, in 1987 and 1989, respectively, and his Ph.D. degree in mechanical engineering from the University of Toronto in 1995. He has held research positions with Nova, DRES, DREV, and DCIEM. He is currently an Associate Professor of Mechanical Engineering at the University of Calgary. His current research is in design, kinematics, and control of robots and iterative learning control.
References (20)
- Iterative learning control for a class of nonlinear systems. Automatica (1993).
- Kuc, Lee, & Nam (1992). An iterative learning control theory for a class of nonlinear dynamic systems. Automatica.
- Amann, N., & Owens, D. (1994). Non-minimum phase plants in iterative learning control. International conference on...
- Amann et al. (1998). Predictive optimal iterative learning control. International Journal of Control.
- Arimoto, Kawamura, & Miyazaki (1984). Bettering operation of robots by learning. Journal of Robotic Systems.
- Chen, Y., & Moore, K. (2000). Improved path following for an omni-directional vehicle via practical iterative learning...
- Chen, Y., Xu, J., & Lee, T. (1996). Current iteration tracking error assisted iterative learning control of uncertain...
- Doyle, Francis, & Tannenbaum (1992). Feedback control theory.
- Elci, Longman, Phan, Juang, & Ugoletti (1994). Automated learning control through model updating for precision motion control. ASME Adaptive Structures and Composite Materials: Analysis and Application.
- Goldsmith, P. (2002). Stability, convergence, and feedback equivalence of LTI iterative learning control. 15th IFAC...
☆ This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Khiang Wee Lim under the direction of Editor Frank L. Lewis.