Elsevier

Systems & Control Letters

Volume 85, November 2015, Pages 37-45

Extremum seeking control based on phasor estimation

https://doi.org/10.1016/j.sysconle.2015.08.010

Abstract

We present an extremum seeking control algorithm based on estimating the phasor of the perturbation frequency in the output of the plant. The phasor estimator is based on a continuous-time Kalman filter, which is reduced to a variable-gain observer by explicitly solving a special case of the Riccati equation. Local stability of the proposed algorithm for general nonlinear dynamic systems is established for the single-input case using averaging and singular perturbation analysis. The advantage of the presented algorithm is that it can be used on plants with large and even variable phase lag.

Introduction

Driving the state of a plant to some desired optimum is one of the overall objectives of any control system  [1]. This optimal state can be known or unknown. If it is known, then most of the system variables (formulated as plant outputs) have predefined values (formulated as reference inputs), and the job of the control system is to drive the outputs towards these values by adjusting the manipulated variables.

On the other hand, if the optimal state is unknown (i.e. the values of the objective and system variables are unknown), then the job of the controller is to find this optimal state by adjusting the manipulated variables so as to get as close to the optimum as possible. Extremum seeking control (ESC) is a control concept for single-objective on-line optimization.

Consider a nonlinear, time-varying plant with a single objective (sometimes also called an index) that can be described by the following state space representation:

dx/dt = f(x, u),    y = h(x)

where x ∈ ℝⁿ is a vector representing the state variables, with initial state x(0) = x₀, u ∈ ℝᵐ is a vector representing the manipulated (input) variables of the plant, and y ∈ ℝ is a scalar representing the output objective (or index) of the plant. Both f: ℝⁿ × ℝᵐ → ℝⁿ and h: ℝⁿ → ℝ are assumed to be sufficiently smooth. The steady-state output as a function of a constant input is assumed to have a minimum (or maximum). Without loss of generality, we will assume the latter case, and the purpose of the ESC controller is then to adjust u so as to achieve a maximal value of y for any given x₀.
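To make this setting concrete, the steady-state map from a constant input u to the output y can be probed numerically by letting the plant settle. The first-order toy plant below is our own illustration, not an example from the paper:

```python
import numpy as np

def steady_state_output(f, h, u, x0, dt=1e-3, T=50.0):
    """Settle the plant dx/dt = f(x, u) under a constant input u with
    forward Euler, then return the steady-state output y = h(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * f(x, u)
    return h(x)

# toy plant: a first-order lag whose steady-state map has a maximum at u = 1
f = lambda x, u: np.array([-x[0] + 1.0 - (u - 1.0) ** 2])
h = lambda x: x[0]
ys = [steady_state_output(f, h, u, [0.0]) for u in (0.0, 1.0, 2.0)]
```

For this plant the steady-state map is y = 1 − (u − 1)², so an ESC controller should drive u towards 1.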

Many applications can be found in the literature, for example braking system control, autonomous vehicles and mobile robots, yield optimization in bio-processing, etc. [2], and more recently cone crushers [3].

According to [2], [4], the first notable work was done by LeBlanc in 1922. It was the first published work on adaptive controllers, and the suggested approach was based on estimating the gradient of the steady-state map by inserting a perturbation in the input. Extremum seeking control received a lot of attention between the 1940s and the 1960s, with commercial controllers even reaching the market [5]. In the 1990s, stochastic ESC [6], [7] and sliding mode ESC [8], [9] appeared.

In 2000, Krstić and Wang [10] presented what can be considered the most influential stability analysis of extremum seeking control for the classic filter-based approach. Later, a non-local, semi-global stability analysis was presented [11]. Compared to the standard filter-based method, an improved dynamic compensator method was later presented by Krstić [12].

The first method for multi-parameter extremum seeking was presented in [13], [14], and the algorithm from [12] was extended to the multi-parameter case in [15], where a rigorous stability analysis was also provided.

In 2009, Newton-like extremum seeking control [16] was presented. Later, a multi-parameter Newton-like method was presented in [17], [18].

The perturbation based approach is primarily based on the gradient descent optimization method. The controller is divided into three parts, as shown in Fig. 1. The first is the addition of a perturbation signal (normally a sinusoidal signal with amplitude a and angular frequency ω, which is the common method [10], or a random signal in stochastic ESC [6]). Next, a gradient estimator finds the rate of change of the output y with respect to the input u (i.e. K_m = ∂y/∂u). The third part is an integrator with gain k. The output of the integrator is the base control signal u₀, which is added to the perturbation signal to generate the control signal u.

The classic method to estimate the gradient is to use a high-pass filter (HPF), a multiplier, and a low-pass filter (LPF) [10]. It was also shown in [11] that the gradient can be estimated with only a multiplier, or with a multiplier and an LPF.
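A minimal sketch of this classic scheme (HPF, multiplier, LPF, integrator) applied to a static map follows; the plant, gains, and frequencies are our own illustrative choices, not values from [10]:

```python
import numpy as np

def classic_esc(f, u0=0.0, a=0.1, omega=5.0, k=1.0,
                wh=1.0, wl=1.0, dt=1e-3, T=100.0):
    """Classic perturbation-based ESC on a static map f(u).
    The HPF removes the DC part of y, the multiplier demodulates with
    the dither, the LPF extracts the gradient estimate, and the
    integrator updates the base control signal u0 (maximization)."""
    xh = 0.0  # high-pass filter state
    xl = 0.0  # low-pass filter state (gradient estimate, ~ (a/2) f'(u0))
    t = 0.0
    for _ in range(int(T / dt)):
        u = u0 + a * np.sin(omega * t)   # add the dither signal
        y = f(u)
        yh = y - xh                      # high-passed output
        xh += dt * wh * yh
        xl += dt * wl * (yh * np.sin(omega * t) - xl)  # demodulate + LPF
        u0 += dt * k * xl                # gradient ascent step
        t += dt
    return u0

# static map with a maximum at u = 2
u_final = classic_esc(lambda u: 1.0 - (u - 2.0) ** 2)
```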

Another approach for gradient estimation is to use an Extended Kalman Filter (EKF) [19], [20]. The method was suggested for static systems or very fast dynamic systems, and the frequency of the added dither signal should be selected slower than the slowest time constant of the process [19]. The idea behind using the EKF as a gradient estimator is to approximate the output of the system (y) by a tangent at the point of operation, i.e.

y(t) ≈ y₀ + k·u(t)

where k is the slope of the tangent. The EKF is then employed to estimate two state variables, x₁ = k and x₂ = y₀. Assuming that the EKF is implemented in discrete time, two samples are required for observability [20]. The discrete state space system is then

x(t_{k+1}) = [1 0; 0 1] x(t_k) + w_k,
[y(t_k); y(t_{k−n})] = [u(t_k) 1; u(t_{k−n}) 1] x(t_k) + v_k

where n is the time interval between the two samples, usually selected equal to a quarter or three quarters of the perturbation period (i.e. π/(2ω) or 3π/(2ω)). The noise signals w_k and v_k have covariance matrices Q and R, respectively. As in the classic filter-based methods, the selection of ω remains crucial.
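Since the measurement model above is linear in the states x₁ = k and x₂ = y₀, one update step can be sketched as an ordinary (linear) Kalman filter. The dither frequency, sample spacing, and noise covariances below are our own illustrative choices:

```python
import numpy as np

def kf_gradient_step(x, P, u_pair, y_pair, Q, R):
    """One update of the gradient-estimating Kalman filter: state
    x = [k, y0] gives the tangent y ~ y0 + k*u. u_pair/y_pair hold the
    current sample and one delayed sample (spaced ~ a quarter of the
    dither period so the two measurement rows are independent)."""
    P = P + Q  # prediction: identity dynamics plus process noise
    H = np.array([[u_pair[0], 1.0],
                  [u_pair[1], 1.0]])  # stacked two-sample measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(y_pair) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# illustrative run on a noiseless linear map y = 3*u + 1 (k = 3, y0 = 1)
x, P = np.zeros(2), np.eye(2)
Q, R = 1e-4 * np.eye(2), 1e-3 * np.eye(2)
t = np.arange(200) * 0.01
u = 0.1 * np.sin(2 * np.pi * t)     # 1 Hz dither sampled at 100 Hz
y = 3.0 * u + 1.0
delay = 25                           # ~ a quarter of the dither period
for i in range(delay, len(t)):
    x, P = kf_gradient_step(x, P, (u[i], u[i - delay]),
                            (y[i], y[i - delay]), Q, R)
```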

Most existing perturbation based methods require a very slow perturbation so that the system appears as a static map. This ensures convergence to the optimal solution, but slows down the system response. Increasing the perturbation frequency allows increasing the integrator gain k, which leads to a faster response but may lead to a sub-optimal solution [21], since the system can no longer be considered a static map. The problem can be mitigated by adding phase compensation, but this may instead lead to instability of the overall system, especially in the case of time-varying phase lag [21].

In this work we present an ESC algorithm that is based on estimating the phasor of the output instead of the gradient. In this way, large phase shifts of the plant can be tolerated, relaxing the assumption that the plant is a static map or that the perturbation frequency is very low. The estimator is based on a variable-gain observer which is derived from the continuous-time Kalman filter. In simulations we demonstrate why this algorithm is preferable in the case of systems with variable phase.

Section snippets

The proposed approach

If we add a slow sinusoidal perturbation to the input, the output will exhibit a periodic (almost sinusoidal) component with different magnitude and phase shift, but with the same frequency. This signal may be approximated by a combination of three components: a constant component, a sine component, and a cosine component, as shown in Fig. 2. We can notice that the amplitudes of the sine and cosine components are related to the current point of operation. The sine component has a positive
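The decomposition into constant, sine, and cosine components can be illustrated with a simple batch correlation over whole dither periods. Note that the paper itself uses a Kalman-filter-based observer for this estimation, so the function below is only a conceptual illustration of the decomposition:

```python
import numpy as np

def phasor_components(y, t, omega):
    """Estimate a0, a_sin, a_cos in y(t) ~ a0 + a_sin*sin(omega*t)
    + a_cos*cos(omega*t) by correlation over whole dither periods."""
    a0 = np.mean(y)
    a_sin = 2.0 * np.mean(y * np.sin(omega * t))
    a_cos = 2.0 * np.mean(y * np.cos(omega * t))
    return a0, a_sin, a_cos

omega = 2.0
t = np.linspace(0.0, 2.0 * np.pi / omega, 1000, endpoint=False)  # one period
y = 1.5 + 0.4 * np.sin(omega * t) - 0.2 * np.cos(omega * t)
a0, a_sin, a_cos = phasor_components(y, t, omega)
```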

Multivariable extremum seeking

The concept of phasor ESC can be extended to the multi-variable case. Similar to the above analysis, small sinusoidal perturbation signals with frequencies (ω₁, ω₂, …, ωₙ) are added to the control signals u₀ = (u_{0,1}, u_{0,2}, …, u_{0,n}), where the ratios ωᵢ/ωⱼ are rational and the frequencies are chosen such that ωᵢ ≠ ωⱼ and ωᵢ + ωⱼ ≠ ωₖ for distinct i, j, and k [13], [14], [15].
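These frequency conditions can be checked mechanically. The helper below is our own utility, not code from the paper; it uses exact rational arithmetic to sidestep floating-point comparisons:

```python
from fractions import Fraction
from itertools import permutations

def valid_dither_frequencies(omegas):
    """Check the dither-frequency conditions for multivariable ESC:
    all omega_i distinct, and omega_i + omega_j != omega_k for distinct
    i, j, k (the ratios are rational by construction here)."""
    fr = [Fraction(w).limit_denominator(10**6) for w in omegas]
    if len(set(fr)) != len(fr):
        return False  # some omega_i == omega_j
    for i, j, k in permutations(range(len(fr)), 3):
        if fr[i] + fr[j] == fr[k]:
            return False
    return True

# (2, 3, 5) violates omega_i + omega_j != omega_k since 2 + 3 = 5
ok_a = valid_dither_frequencies([2, 3, 5])
ok_b = valid_dither_frequencies([2, 3, 7])
```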

Accordingly, it can be shown that the output of the plant y can be approximated as

y ≈ β₀ + Σ_{j=1}^{n} α_{j,1} sin(ωⱼt) + Σ_{j=1}^{n} β_{j,1} cos(ωⱼt)

where β₀ = f(u₀) and α_{j,1} ≈ Kⱼ cos(

Stability analysis

Considering the single-input case of system (1), we assume that we know a control law [10]

u = α(x, θ).

This control law is a function of θ, which is assumed to act as a static steady-state feedback law [10]. Accordingly, system (1a) can be written as

dx/dt = f(x, α(x, θ))

and is parameterized by θ. We make some assumptions about the existence and stability of an equilibrium point similar to those made in [10], [17]. For more details, please refer to [10].

Assumption 1

There exists a smooth function l:R

Simulation

Let us consider a system with the following state space representation:

ẋ₁ = 10(−x₁ + θ)
ẋ₂ = −x₂ − (x₁ − 5 − a_θ sin(ω_θ t))² − 0.05θ² + 0.2
ẋ₃ = 8(−x₃ + x₂)
y = x₃.

In this system, the optimum θ* = 5 + a_θ sin(ω_θ t) varies with time in a manner governed by the values of a_θ and ω_θ. By setting a_θ = 0, a linearization of the system exhibits a phase shift that varies with θ, i.e. if we insert an input in the form of θ + a sin(ωt), the output of the system will have a sinusoidal component with a relative phase shift that varies with θ as

Conclusion and future work

The suggested phasor-based ESC algorithm was shown to be locally stable, similar to other gradient based algorithms. Moreover, using the cosine component of the phasor for feedback led to enhanced performance of the overall system in terms of the ability to deal with phase lag. A stability proof for the case of sine component feedback was enabled by an explicit solution of the Riccati equation in the continuous Kalman filter, which also simplifies the implementation of the controller. The

References (33)

  • S.-J. Liu et al., Stochastic Averaging and Stochastic Extremum Seeking (2012)
  • S. Drakunov et al., ABS control using optimum search via sliding modes, IEEE Trans. Control Syst. Technol. (1995)
  • Y. Pan et al., Stability and performance improvement of extremum seeking control with sliding mode, Internat. J. Control (2003)
  • M.A. Rotea, Analysis of multivariable extremum seeking algorithms
  • G.C. Walsh, On the application of multi-parameter extremum seeking control
  • K.B. Ariyur et al., Analysis and design of multivariable extremum seeking
