1 Introduction

In dynamic system theory, the forward problem consists of describing the system behavior using mathematical models. These expressions can be established using analytical development and physical laws, which is difficult in most cases. To deal with this problem, the system behavior can be approximated using identification theory[1,2].

In this context, there are several identification methods, and the most widely used are the so-called recursive methods. Indeed, they are frequently used because of their high performance and their capability to approximate a large class of systems[3-6].

In order to simplify the use of these techniques, several toolboxes and graphical user interfaces (GUIs) have been implemented in the Matlab environment. Among them, we can mention the frequency domain system identification toolbox[7], the continuous-time system identification (CONTSID) toolbox[8], the recursive identification algorithms library[9], the MathWorks system identification toolbox[10], the interactive tool for system identification education (ITSIE)[11], the computer-aided program for time-series analysis and identification of noisy systems (CAPTAIN) toolbox[12], and the University of Newcastle identification toolbox (UNIT)[13]. However, these tools are, in most cases, not easy to use, commercial, and require a minimum of familiarity with the Matlab environment. Furthermore, they are relatively limited to specific methods and do not allow a comparison between the main identification techniques.

In this paper, we present RIM, an identification platform that includes the main recursive methods. This platform is developed using the Matlab GUI tools[14], and it is non-commercial and freely available. RIM has the advantage of being easy to use by beginners and students to validate theoretical results without any need to learn Matlab programming or environment manipulations.

The paper is organized as follows. In Sections 2 and 3, the main recursive identification methods are presented. Description of the RIM GUI is given in Section 4. Application examples are discussed in Section 5.

2 Recursive identification algorithms

The identification problem can be formulated as

$$y(t) = {\theta ^{\rm{T}}}\varphi (t) + e(t)$$
(1)

where y(t) is the system output, θ is the parameter vector to be estimated, φ(t) is the observation vector, and e(t) is a white noise with zero mean and constant variance. In this paper, the superscript T denotes the transpose. The dimensions of φ and θ depend on the chosen model.

In this section, we give a survey of the best-known algorithms used in system identification.

2.1 Stochastic gradient (SG) algorithm

From (1), we define the criterion J1(t) as

$$\mathop {\min}\limits_{\hat \theta (t)} {J_1}(t) = {\rm{E}}\{{\Vert {y(t) - {{\hat \theta}^{\rm{T}}}(t)\varphi (t)} \Vert ^2}\} .$$
(2)

Minimizing this criterion and using the stochastic gradient search principle, we obtain the stochastic gradient identification algorithm[15]

$$\hat \theta (t) = \hat \theta (t - 1) + {{\varphi (t)} \over {r(t)}}\left[ {y(t) - {{\hat \theta}^{\rm{T}}}(t - 1)\varphi (t)} \right]$$
(3)
$$r(t) = r(t - 1) + {\left\Vert {\varphi (t)} \right\Vert ^2},\quad r(0) = 1$$
(4)

where

$${\left\Vert X \right\Vert ^2} = {\rm{tr}}\left[ {X{X^{\rm{T}}}} \right].$$
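As an illustration, the following minimal Matlab sketch implements the SG update (3) and (4); it assumes that y holds the N output samples and that the columns of Phi hold the corresponding observation vectors φ(t) (both names are illustrative).

% Minimal sketch of the stochastic gradient (SG) algorithm (3) and (4).
% Assumed inputs: y (1 x N output samples), Phi (n x N observation vectors).
[n, N] = size(Phi);
theta = zeros(n, 1);                             % initial parameter estimate
r = 1;                                           % r(0) = 1
for t = 1:N
    phi = Phi(:, t);
    r = r + norm(phi)^2;                         % update (4)
    theta = theta + (phi/r)*(y(t) - theta'*phi); % update (3)
end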

2.2 Recursive least squares (RLS) algorithm

For this algorithm, the criterion J2(t) is used:

$$\mathop {\min}\limits_{\hat \theta (t)} {J_2}(t) = {1 \over 2}\sum\limits_{i = 1}^t {{{[y(i) - {{\hat \theta}^{\rm{T}}}\varphi (i)]}^2}.}$$
(5)

Minimizing this criterion, we obtain

$$\hat \theta (t) = \hat \theta (t - 1) + F(t)\varphi (t)\varepsilon (t)$$
(6)

with

$$\varepsilon (t) = y(t) - {\hat \theta ^{\rm{T}}}(t - 1)\varphi (t)$$

and

$$F(t) = F(t - 1) - {{F(t - 1)\varphi (t){\varphi ^{\rm{T}}}(t)F(t - 1)} \over {1 + {\varphi ^{\rm{T}}}(t)F(t - 1)\varphi (t)}}$$
(7)

where F(t) is the adaptation gain and ε(t) is the prediction error.

The formula of the adaptation gain can be generalized by introducing the factors λ1(t) and λ2(t)[16], which gives

$$F(t) = {1 \over {{\lambda _1}(t)}}\left({F(t - 1) - {{F(t - 1)\varphi (t){\varphi ^{\rm{T}}}(t)F(t - 1)} \over {{\textstyle{{{\lambda _1}(t)} \over {{\lambda _2}(t)}}} + {\varphi ^{\rm{T}}}(t)F(t - 1)\varphi (t)}}} \right)$$
(8)

with

$$0 \leq {\lambda _1}(t) \leq 1\quad \quad {\rm{and}}\quad \quad 0 \leq {\lambda _2}(t) \leq 2.$$

The effects of λ1(t) and λ2(t) on the adaptation gain are summarized in Table 1.

Table 1 Effects of λ1(t) and λ2(t)
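To fix ideas, a minimal Matlab sketch of the RLS update (6) with the generalized gain (8) is given below; constant values of λ1 and λ2 are used as an example, and y and Phi are assumed as in the previous sketch.

% Minimal sketch of RLS with the generalized adaptation gain (8).
% Taking lambda1 = lambda2 = 1 recovers the basic gain (7).
[n, N] = size(Phi);
theta = zeros(n, 1);
F = 1000*eye(n);                   % large initial adaptation gain F(0)
lambda1 = 0.98; lambda2 = 1;       % example constant factors
for t = 1:N
    phi = Phi(:, t);
    epsi = y(t) - theta'*phi;      % prediction error epsilon(t)
    F = (F - (F*(phi*phi')*F)/(lambda1/lambda2 + phi'*F*phi))/lambda1;  % (8)
    theta = theta + F*phi*epsi;    % update (6)
end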

2.3 Multi-innovation stochastic gradient (MISG) algorithm

The multi-innovation stochastic gradient (MISG) algorithm was proposed in [17] in order to achieve fast convergence rates and improve the parameter estimation accuracy[18,19]. The basic idea of this algorithm is to use not only the current data {y(t), φ(t)} but also the past data {y(t − i), φ(t − i), i = 1, 2, ⋯, p − 1}, where p is a positive integer denoting the innovation length.

Define the stacked output vector Y(p, t) and stacked information matrix φ(p, t) as

$$Y(p,t) = \left[ {\matrix{{y(t)} \cr{y(t - 1)} \cr\vdots \cr{y(t - p + 1)} \cr}} \right],{\phi ^{\rm{T}}}(p,t) = \left[ {\matrix{{{\varphi ^{\rm{T}}}(t)} \cr{{\varphi ^{\rm{T}}}(t - 1)} \cr\vdots \cr{{\varphi ^{\rm{T}}}(t - p + 1)} \cr}} \right].$$

Minimize the quadratic criterion defined by

$$\mathop {\min}\limits_{\hat \theta (t)} {J_3}(t) = {\left\Vert {Y(p,t) - {\phi ^{\rm{T}}}(p,t)\hat \theta (t)} \right\Vert ^2}.$$
(9)

Finally, the multi-innovation stochastic gradient is obtained as

$$\hat \theta (t) = \hat \theta (t - 1) + {{\phi (p,t)} \over {r(t)}}\left[ {Y(p,t) - {\phi ^{\rm{T}}}(p,t)\hat \theta (t - 1)} \right]$$
(10)
$$r(t) = r(t - 1) + {\left\Vert {\varphi (t)} \right\Vert ^2},\quad \;\;r(0) = 1$$
(11)
$$\phi (p,t) = [\varphi (t),\varphi (t - 1), \cdots ,\varphi (t - p + 1)]$$
(12)
$$Y(p,t) = {[y(t),y(t - 1), \cdots ,y(t - p + 1)]^{\rm{T}}}.$$
(13)

When p = 1, the MISG algorithm reduces to the SG algorithm.
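A minimal Matlab sketch of the MISG recursion (10)-(13) follows, under the same assumptions on y and Phi as before; the loop starts at t = p so that the stacked quantities are well defined.

% Minimal sketch of the MISG algorithm (10)-(13), innovation length p.
[n, N] = size(Phi);
p = 3;                                    % example innovation length
theta = zeros(n, 1);
r = 1;                                    % r(0) = 1
for t = p:N
    PhiP = Phi(:, t:-1:t-p+1);            % phi(p,t) as in (12)
    Yp = y(t:-1:t-p+1)';                  % Y(p,t) as in (13)
    r = r + norm(Phi(:, t))^2;            % update (11)
    theta = theta + (PhiP/r)*(Yp - PhiP'*theta);   % update (10)
end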

2.4 Multi-innovation least squares (MILS) algorithm

The minimized criterion for this algorithm is given by

$$\mathop {\min}\limits_{\hat \theta (t)} {J_4}(t) = {1 \over 2}\sum\limits_{i = 1}^t {{{\left\Vert {Y(p,i) - {\phi ^{\rm{T}}}(p,i)\hat \theta} \right\Vert}^2}.}$$
(14)

Then, the multi-innovation recursive identification algorithm is obtained[19,20] as

$$\hat \theta (t) = \hat \theta (t - 1) + L(t)\left[ {Y(p,t) - {\phi ^{\rm{T}}}(p,t)\hat \theta (t - 1)} \right]$$
(15)
$$L(t) = F(t - 1)\phi (p,t){\left[ {{I_p} + {\phi ^{\rm{T}}}(p,t)F(t - 1)\phi (p,t)} \right]^{- 1}}$$
(16)
$$F(t) = F(t - 1) - L(t){\phi ^{\rm{T}}}(p,t)F(t - 1)$$
(17)
$$\phi (p,t) = [\varphi (t),\varphi (t - 1), \cdots ,\varphi (t - p + 1)]$$
(18)
$$Y(p,t) = {[y(t),y(t - 1), \cdots ,y(t - p + 1)]^{\rm{T}}}.$$
(19)

When p = 1, the MILS algorithm reduces to the RLS algorithm.
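Similarly, a minimal Matlab sketch of the MILS recursion (15)-(19) can be written as follows (same assumed data as above):

% Minimal sketch of the MILS algorithm (15)-(19).
[n, N] = size(Phi);
p = 3;                                    % example innovation length
theta = zeros(n, 1);
F = 1000*eye(n);                          % initial adaptation gain F(0)
for t = p:N
    PhiP = Phi(:, t:-1:t-p+1);            % phi(p,t) as in (18)
    Yp = y(t:-1:t-p+1)';                  % Y(p,t) as in (19)
    L = F*PhiP / (eye(p) + PhiP'*F*PhiP); % gain (16)
    theta = theta + L*(Yp - PhiP'*theta); % update (15)
    F = F - L*PhiP'*F;                    % update (17)
end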

3 Recursive identification techniques

The recursive identification techniques are divided into two categories. The first one is based on the whitening of the prediction error. The second one is based on the decorrelation between the observation vector and the prediction error[21, 22]. In this section, we give a survey of the best-known techniques of these two categories.

For the first category, the four well-known methods are the recursive least squares (RLS), the extended least squares (ELS), the generalized least squares (GLS) and the output error with extended prediction model (OEEPM).

3.1 Recursive least squares

The model selected for this technique is the autoregressive with external input (ARX) model given by

$$A({q^{- 1}})y(t) = {q^{- d}}B({q^{- 1}})u(t) + e(t)$$
(20)

with

$$\matrix{{A({q^{- 1}}) = 1 + \sum\limits_{i = 1}^{{n_A}} {{a_i}{q^{- i}}}} \hfill \cr{B({q^{- 1}}) = \sum\limits_{i = 1}^{{n_B}} {{b_i}{q^{- i}}} .} \hfill \cr}$$

Hence, the model of (20) can be written as

$$y(t) = {\theta ^{\rm{T}}}\varphi (t) + e(t)$$
(21)

where

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - y(t - 1), \cdots , - y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B})].} \hfill \cr}$$
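As an example, the ARX observation vector can be built in Matlab as follows, assuming y and u are row vectors of recorded output and input samples; the orders and delay are example values.

% Minimal sketch: building the ARX observation vector of (21) at time t.
nA = 2; nB = 2; d = 0;             % example orders and delay
t = 10;                            % example time index (t > nA and t > d + nB)
phi = [-y(t-1:-1:t-nA), u(t-1-d:-1:t-d-nB)]';
% phi can then be fed to any algorithm of Section 2, e.g., the RLS update (6)-(8).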

3.2 Extended least squares

In this method, the model is autoregressive moving average with external input (ARMAX) given by

$$A({q^{- 1}})y(t) = {q^{- d}}B({q^{- 1}})u(t) + C({q^{- 1}})e(t)$$
(22)

where

$$C({q^{- 1}}) = 1 + \sum\limits_{i = 1}^{{n_C}} {{c_i}{q^{- i}}} .$$

From (22), we have

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}},{{\hat c}_1}, \cdots ,{{\hat c}_{{n_C}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - y(t - 1), \cdots , - y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B}),e (t - 1), \cdots ,e (t - {n_C})].} \hfill \cr}$$

We replace e(t) by ε(t) in the observation vector because e(t) is not observable. Finally, we obtain

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}},{{\hat c}_1}, \cdots ,{{\hat c}_{{n_C}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - y(t - 1), \cdots , - y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B}),\varepsilon (t - 1), \cdots ,\varepsilon (t - {n_C})].} \hfill \cr}$$
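The only change with respect to the RLS sketch is the regressor: the past residuals replace the unmeasurable noise terms. A hypothetical one-iteration sketch, where eps_hist stores the residuals and theta is the current estimate (both are assumed names):

% Minimal sketch of one ELS iteration: past residuals eps_hist(t-i)
% replace the unmeasurable noise e(t-i) in the ARMAX regressor.
nA = 2; nB = 2; nC = 1; d = 0;     % example orders
phi = [-y(t-1:-1:t-nA), u(t-1-d:-1:t-d-nB), eps_hist(t-1:-1:t-nC)]';
eps_hist(t) = y(t) - theta'*phi;   % new residual, stored for later regressors
% theta and F are then updated with the RLS equations (6)-(8).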

3.3 Generalized least squares

This method is adapted to the ARARX model given by

$$A({q^{- 1}})y(t) = {q^{- d}}B({q^{- 1}})u(t) + {{e(t)} \over {C({q^{- 1}})}}.$$
(23)

Then, we have

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}},{{\hat c}_1}, \cdots ,{{\hat c}_{{n_C}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - y(t - 1), \cdots , - y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B}),\alpha (t - 1), \cdots ,\alpha (t - {n_C})].} \hfill \cr}$$

The estimated value of α(t) is given by

$$\alpha (t) = \hat A({q^{- 1}})y(t) - {q^{- d}}\hat B({q^{- 1}})u(t).$$
(24)

3.4 Output error with extended prediction model

The algorithm is the same as that of the ELS, with

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}},{{\hat h}_1}, \cdots ,{{\hat h}_{{n_C}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - \hat y(t - 1), \cdots , - \hat y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B}),\varepsilon (t - 1), \cdots ,\varepsilon (t - {n_C})].} \hfill \cr{{{\hat c}_i} = {{\hat h}_i} + {{\hat a}_i}.} \hfill \cr}$$

For the second category, the four main methods are the instrumental variable with auxiliary model (IVAM), the output error with fixed compensator (OEFC), the output error with filtered observations (OEFO) and the output error with adaptive filtered observations (OEAFO).

3.5 Instrumental variable with auxiliary model

For this technique, we have

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - {y_{IV}}(t - 1), \cdots , - {y_{IV}}(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B})]} \hfill \cr}$$

where the prediction auxiliary model is defined by

$${y_{IV}}(t) = - \sum\limits_{i = 1}^{{n_A}} {{{\hat a}_i}} {y_{IV}}(t - i) + \sum\limits_{i = 1}^{{n_B}} {{{\hat b}_i}u(t - d - i).}$$
(25)
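In Matlab, the auxiliary model output (25) can be propagated recursively as sketched below; a_hat and b_hat denote the current coefficient estimates and yIV the auxiliary output sequence (all illustrative names, with t assumed larger than nA and d + nB).

% Minimal sketch of the prediction auxiliary model (25).
% a_hat (1 x nA) and b_hat (1 x nB) are the current estimates.
yIV(t) = -a_hat*yIV(t-1:-1:t-nA)' + b_hat*u(t-d-1:-1:t-d-nB)';
% yIV(t-i) then replaces y(t-i) in the instrumental observation vector.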

3.6 Output error with fixed compensator

The disturbed output y(t) is replaced by the prediction \(\hat y(t)\) in the RLS predictor, and we obtain

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - \hat y(t - 1), \cdots , - \hat y(t - {n_A}),u(t - 1 - d), \cdots ,u(t -} \hfill \cr{\quad \quad d - {n_B})].} \hfill \cr}$$

3.7 Output error with filtered observations

This method is based on the same principle as the previous one, with the observation vector filtered by a stable filter A0(q−1). Thus,

$$\matrix{{\varepsilon (t) = y(t) - {{\hat \theta}^{\rm{T}}}\varphi (t)} \hfill \cr{F(t) = F(t - 1) - {{F(t - 1){\varphi _f}(t){\varphi _f}{{(t)}^{\rm{T}}}F(t - 1)} \over {1 + {\varphi _f}{{(t)}^{\rm{T}}}F(t - 1){\varphi _f}(t)}}} \hfill \cr{\hat \theta (t) = \hat \theta (t - 1) + F(t){\varphi _f}(t)\varepsilon (t)} \hfill \cr}$$

with

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - \hat y(t - 1), \cdots , - \hat y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B})]} \hfill \cr{\varphi _f^{\rm{T}} = {1 \over {{A_0}({q^{- 1}})}}{\varphi ^{\rm{T}}}.} \hfill \cr}$$
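Filtering the observation vector through 1/A0(q−1) can be sketched in Matlab with the built-in filter function; phi_hist is an assumed matrix whose t-th column is φ(t), and a0 holds the coefficients of an example stable filter.

% Minimal sketch: filtering the observations by 1/A0(q^-1) (OEFO).
a0 = [1, -0.5];                              % example stable filter A0(q^-1)
phi_f_hist = filter(1, a0, phi_hist, [], 2); % filter each component along time
% The t-th column of phi_f_hist is the filtered regressor phi_f(t).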

3.8 Output error with adaptive filtered observations

In this method, the observation vector is filtered by an estimated filter \(\hat A({q^{- 1}})\). Hence,

$$\matrix{{\varepsilon (t) = y(t) - {{\hat \theta}^{\rm{T}}}\varphi (t)} \hfill \cr{F(t) = F(t - 1) - {{F(t - 1){\varphi _f}(t){\varphi _f}{{(t)}^{\rm{T}}}F(t - 1)} \over {1 + {\varphi _f}{{(t)}^{\rm{T}}}F(t - 1){\varphi _f}(t)}}} \hfill \cr{\hat \theta (t) = \hat \theta (t - 1) + F(t){\varphi _f}(t)\varepsilon (t)} \hfill \cr}$$

where

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - \hat y(t - 1), \cdots , - \hat y(t - {n_A}),u(t - 1 - d), \cdots ,} \hfill \cr{\quad \quad u(t - d - {n_B})]} \hfill \cr{\varphi _f^{\rm{T}} = {1 \over {\hat A({q^{- 1}})}}{\varphi ^{\rm{T}}}.} \hfill \cr}$$

3.9 Auxiliary model identification algorithm

This technique was proposed in [20, 23] for the output error moving average (OEMA) system given by

$$y(t) = {{B({q^{- 1}})} \over {A({q^{- 1}})}}u(t) + D({q^{- 1}})e(t).$$
(26)

From (26), we have

$$y(t) = x(t) + D({q^{- 1}})e(t)$$
(27)

with

$$D({q^{- 1}}) = 1 + \sum\limits_{i = 1}^{{n_D}} {{d_i}{q^{- i}}} .$$

Define the parameter vector

$$\begin{array}{*{20}c} {\theta = \left[ {\begin{array}{*{20}c} {\theta _s } \\ {\theta _n } \\ \end{array} } \right]} \\ {\theta _s^T = \left[ {a_1 , \cdots ,a_{n_A } ,b_1 , \cdots ,b_{n_B } } \right]} \\ {\theta _n^T = \left[ {d_1 ,d_2 , \cdots ,d_{n_D } } \right]} \\ \end{array} $$

and the observations vector

$$\begin{array}{*{20}c} {\phi = \left[ {\begin{array}{*{20}c} {\phi _s } \\ {\phi _n } \\ \end{array} } \right]} \\ {\phi _s^T = \left[ { - x(t - 1), \cdots , - x(t - n_A ),u(t - 1), \cdots ,u(t - n_B )} \right]} \\ {\phi _n^T = \left[ {e(t - 1), \cdots ,e(t - n_D )} \right].} \\ \end{array} $$

We choose \(B_a(q^{-1})/A_a(q^{-1})\) as the transfer function of the auxiliary model and \(x_a(t)\) as its output. The unknown inner variables x(t − i) are replaced by \(x_a(t - i)\), and the unmeasurable noise terms e(t − i) are replaced by their estimated residuals ε(t − i).

Finally, we obtain

$$\matrix{{{{\hat \theta}^{\rm{T}}} = [{{\hat a}_1}, \cdots ,{{\hat a}_{{n_A}}},{{\hat b}_1}, \cdots ,{{\hat b}_{{n_B}}},{{\hat d}_1}, \cdots ,{{\hat d}_{{n_D}}}]} \hfill \cr{{\varphi ^{\rm{T}}} = [ - {x_a}(t - 1), \cdots , - {x_a}(t - {n_A}),u(t - 1), \cdots ,} \hfill \cr{\quad \quad u(t - {n_B}),\varepsilon (t - 1), \cdots ,\varepsilon (t - {n_D})].} \hfill \cr}$$

For each identification method, one of the algorithms described in Section 2 can be used to estimate the parameter vector.

To validate the various identified models, estimates of the normalized autocorrelations RN(i) are computed. For the first category of identification methods, the whiteness test is applied and RN(i) is defined as

$$RN(i) = {{R(i)} \over {R(0)}},\quad \quad i = 1,2, \cdots ,\max ({n_A},{n_B} + d)$$
(28)

with

$$R(i) = {1 \over N}\sum\limits_{t = 1}^N {\varepsilon (t)\varepsilon (t - i),} \quad \quad R(0) = {1 \over N}\sum\limits_{t = 1}^N {{\varepsilon ^2}} (t).$$
(29)

For the second category, the uncorrelation test is applied and we have

$$RN(i) = {{R(i)} \over {\sqrt {\left({{1 \over N}\sum\limits_{t = 1}^N {{{\hat y}^2}(t)}} \right)\left({{1 \over N}\sum\limits_{t = 1}^N {{\varepsilon ^2}(t)}} \right)}}}$$
(30)

with

$$R(i) = {1 \over N}\sum\limits_{t = 1}^N {\varepsilon (t)\hat y(t - i)}$$
(31)

where ε(t) is the output prediction error, y(t) is the system output, \(\hat y(t)\) is the model output, and N is the number of data points.

The identified model is valid if \(\vert RN(i)\vert \leq {{2.17} \over {\sqrt N}}\). In practice, the criterion |RN(i)| ≤ 0.15 is used[22].
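A minimal Matlab sketch of the whiteness test (28) and (29) is given below; err is an assumed row vector of prediction errors, and the orders are example values.

% Minimal sketch of the whiteness test (28)-(29) on the residuals err.
nA = 2; nB = 2; d = 0;                    % example orders
N = numel(err);
R0 = sum(err.^2)/N;                       % R(0) as in (29)
nmax = max(nA, nB + d);
RN = zeros(1, nmax);
for i = 1:nmax
    RN(i) = (sum(err(i+1:N).*err(1:N-i))/N) / R0;   % RN(i) = R(i)/R(0)
end
valid = all(abs(RN) <= 0.15);             % practical criterion [22]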

4 Description of the RIM toolbox

The proposed RIM GUI is developed using the GUI tools available in Matlab, and all algorithms described in Sections 2 and 3 are programmed in the Matlab environment in such a way that the user does not need any experience with Matlab or any other programming language.

To start the RIM platform, type RIM at the Matlab command line.

The RIM window appears on the screen as shown in Fig. 1.

Fig. 1 RIM window

In this platform, the user is free to work with either the classical menu or the buttons; the menu entries are linked to the same actions as the buttons. Furthermore, RIM allows the user to save or print the identification results in several formats, such as *.m, *.txt and *.doc, using the options “Save as ⋯” and “Print” in the menu. RIM also allows plotting the variables (identification error, error autocorrelation, etc.) in external figures. Finally, RIM displays a message indicating whether the model is valid, based on the values of the error autocorrelation.

The user can also jump from one identification category to the other by pressing the button “Next” or “Previous”. Each option has a keyboard shortcut, such as CTRL+S for “Save as ⋯” and CTRL+P for “Print”.

The use of RIM can be summarized in three steps, which are the main steps of any identification process, as shown in Fig. 2. The first step is the generation of the I/O (data) file. The second is the choice of the techniques to be used for system identification. The last one is the selection of the identification parameters, such as the polynomial orders \(n_A\), \(n_B\) and \(n_C\), the initial adaptation gain F(0) and the forgetting factor λ1(t).

Fig. 2 Identification steps

5 Application examples

5.1 Example 1: Identification of a piezoelectric actuator

In order to illustrate the application and the performance of the RIM GUI, this section discusses the identification of a piezoelectric actuator (PEA), which is a nonlinear system.

The experimental data used for the identification correspond to the piezoelectric actuator APA-120ML. The excitation signal is a sinusoidal signal of frequency 50 Hz: u(t) = 68.5 sin(2π × 50t + 0.44) + 61.5 (V).

5.1.1 Identification of ARX model

Piezoelectric actuators are generally modeled by a second-order system. Therefore, the polynomial orders of the model are selected as follows: \(n_A\) = 2, \(n_B\) = 2, \(n_C\) = 1 and d = 0. The identification parameters used in this paper are summarized in Table 2.

Table 2 Identification parameters

The identification results using the traditional RLS method are shown in Fig. 3. From Fig. 3, the values of |RN(i)| are all less than 0.15, which means that the model is valid and the system can be represented by the ARX model as

$$\matrix{{y(t) = - {a_1}y(t - 1) - {a_2}y(t - 2) + {b_1}u(t - 1) +} \hfill \cr{\quad \quad \;{b_2}u(t - 2) + e(t)} \hfill \cr}$$

with a1 = −1.0577, a2 = 0.0575, b1 = −17.49 × 10−5 and b2 = 17.43 × 10−5.

Fig. 3 Identification results using the RLS method

Based on RIM, more information can be displayed (see Figs. 4-6) to assess the model validity. Fig. 4 shows a comparison between the physical system response and the model output. It can be seen that the model accurately describes the real system behavior.

Fig. 4 Outputs curves

Fig. 5 Input-output feature

Fig. 6 Identification error

From Fig. 5, the input-output feature indicates that the hysteresis phenomenon, which is the main property of piezoelectric actuators, can be described accurately by the proposed ARX model. From Fig. 6, the maximum relative error is 5%, which is within the accepted range.

5.1.2 Comparative study

As mentioned before, one of the advantages of RIM is its capability to carry out a comparison between the various identification methods based on the normalized autocorrelation values. The quantified results are shown in Table 3.

Table 3 Comparison between different identification methods

The comparison can be carried out using two criteria. The first one is the autocorrelation boundaries (|RN|max, |RN|min) for each category. The second one is based on the choice of the model structure: we compare the values of |RN|max and |RN|min for both the ARX and ARMAX models.

From Table 3, it can be seen that the ELS and IVAM methods are more valid than the others, and the PEA can be identified by these two techniques. For the second criterion, the |RN|max and |RN|min values of the ELS method are the smallest among the methods based on the ARMAX structure. Among the methods based on the ARX structure, the IVAM has the smallest values of |RN|max and |RN|min.

5.1.3 Effect of identification parameters

RIM also allows the user to study the influence of the identification parameters on the results. For example, in the case of a decreasing gain, the model is valid, but with a constant gain, the model is not valid. A summary of the different cases presented in Table 1, using the identification parameters of Table 2, is given in Table 4.

Table 4 Influence of adaptation gain on model validity

5.2 Example 2

Let us consider the following system:

$$\matrix{{A({q^{- 1}})y(t) = B({q^{- 1}})u(t) + e(t)} \hfill \cr{A({q^{- 1}}) = 1 + {a_1}{q^{- 1}} + {a_2}{q^{- 2}} = 1 - 1.70{q^{- 1}} + 0.85{q^{- 2}}} \hfill \cr{B({q^{- 1}}) = {b_1}{q^{- 1}} + {b_2}{q^{- 2}} = 0.32{q^{- 1}} + 0.60{q^{- 2}}.} \hfill \cr}$$

This example can be rewritten as

$$y(t) = {\theta ^{\rm{T}}}\varphi (t) + e(t).$$

The input u(t) is an uncorrelated persistent excitation signal sequence with zero mean and unit variance, and e(t) is a white noise sequence with zero mean and variance 0.5. The number of data points is chosen as N = 500. Using RIM, the model parameters are estimated by applying the traditional RLS algorithm, the MILS algorithm and the MISG algorithm. The identification results for the MILS algorithm with p = 3 are shown in Fig. 7. The complete results are summarized in Table 5.
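For readers who wish to reproduce this experiment, the data of this example can be generated with a short Matlab script such as the following sketch (the variable names are arbitrary):

% Minimal sketch: generating the I/O data of Example 2.
N = 500;
u = randn(1, N);                       % zero mean, unit variance excitation
e = sqrt(0.5)*randn(1, N);             % white noise with variance 0.5
a = [-1.70, 0.85]; b = [0.32, 0.60];   % true parameters of A and B
y = zeros(1, N);
for t = 3:N
    y(t) = -a(1)*y(t-1) - a(2)*y(t-2) + b(1)*u(t-1) + b(2)*u(t-2) + e(t);
end
% The pair (u, y) can then be saved as a data file and loaded into RIM.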

Fig. 7 Identification results using the MILS method (p = 3)

Table 5 Identification results of an ARX system

From Table 5, it can be seen that the MILS algorithm presents better results than the traditional RLS and MISG algorithms.

5.3 Example 3

This example considers an OEMA system in order to evaluate the performance of the multi-innovation auxiliary model algorithm in parameter estimation.

The system is given by

$$\matrix{{y(t) = {{B({q^{- 1}})} \over {A({q^{- 1}})}}u(t) + D({q^{- 1}})e(t)} \hfill \cr{A({q^{- 1}}) = 1 + {a_1}{q^{- 1}} + {a_2}{q^{- 2}} = 1 - 1.60{q^{- 1}} + 0.80{q^{- 2}}} \hfill \cr{B({q^{- 1}}) = {b_1}{q^{- 1}} + {b_2}{q^{- 2}} = 0.23{q^{- 1}} + 0.70{q^{- 2}}} \hfill \cr{D({q^{- 1}}) = 1 + {d_1}{q^{- 1}} = 1 - 0.70{q^{- 1}}.} \hfill \cr}$$

The simulation parameters are the same as those used in the previous example. Using RIM, the parameters estimated with the auxiliary model extended least squares (AM-ELS) and the multi-innovation auxiliary model extended least squares (MI-AM-ELS) (p = 3, p = 5) algorithms are given in Table 6.

Table 6 Identification results of an OEMA system

From Table 6, the MI-AM-ELS algorithm presents better estimation accuracy than the AM-ELS algorithm. These results can be further improved by increasing the innovation length p.

6 Conclusions

In this paper, we presented a Matlab GUI platform for system identification. The proposed toolbox deals with the recursive identification methods and contains the necessary tools to validate many theoretical aspects in the domain of system identification.

The RIM toolbox allows users to run simulations without the need to program the identification techniques or to have programming experience in the Matlab environment. RIM gives users the possibility to study the effects of the different identification parameters and provides deeper insight into system identification theory.