
Sℒ1-Simplex: Safe Velocity Regulation of Self-Driving Vehicles in Dynamic and Unforeseen Environments

Published: 20 February 2023

Abstract

This article proposes a novel extension of the Simplex architecture with model switching and model learning to achieve safe velocity regulation of self-driving vehicles in dynamic and unforeseen environments. To guarantee the reliability of autonomous vehicles, an ℒ1 adaptive controller that compensates for uncertainties and disturbances is employed by the Simplex architecture as a verified high-assurance controller (HAC) to tolerate concurrent software and physical failures. Meanwhile, the safe switching controller is incorporated into the HAC for safe velocity regulation in the dynamic (prepared) environments, through the integration of the traction control system and anti-lock braking system. Due to the high dependence of vehicle dynamics on the driving environments, the HAC leverages the finite-time model learning to timely learn and update the vehicle model for ℒ1 adaptive controller, when any deviation from the safety envelope or the uncertainty measurement threshold occurs in the unforeseen driving environments. With the integration of ℒ1 adaptive controller, safe switching controller and finite-time model learning, the vehicle’s angular and longitudinal velocities can asymptotically track the provided references in the dynamic and unforeseen driving environments, while the wheel slips are restricted to safety envelopes to prevent slipping and sliding. Finally, the effectiveness of the proposed Simplex architecture for safe velocity regulation is validated by the AutoRally platform.

1 Introduction

Intelligent transportation systems (ITS) that embed vehicles, roads, traffic lights, and message signs, along with microchips and sensors, are bringing significant improvements in transportation system performance, including reduced congestion, increased safety, and traveler convenience [14]. Intelligent vehicles that aim to improve traffic safety, transport efficiency, and driving comfort are playing a major role in the ITS, among which the longitudinal vehicle dynamics control is an important aspect. Traction control system (TCS) and anti-lock braking system (ABS) are representative technologies for longitudinal vehicle dynamic control systems [22]. Specifically, the ABS is primarily designed to prevent excessive or insufficient wheel slip to keep vehicle steerable and stable during intense braking events, which contributes to high brake performance and road safety [5, 31, 32], while the TCS primarily regulates wheel slip to reduce or eliminate excessive slipping or sliding during vehicle acceleration, which results in better drivability, safety, and traction performance in adverse weather or traffic conditions [8, 10, 12, 30, 38]. Both TCS and ABS are complicated by nonlinearities, uncertainties, and parameter variations, which are induced by variations in the disc pad friction coefficient [18], nonlinear relation between brake torque and pressure [18], nonlinear wheel-slip characteristics [12], among many others. To regulate slip in these challenging scenarios, various model-based control schemes have been proposed, e.g., proportional-integral-derivative control in combination with sliding mode observer [27], fuzzy control [23], model predictive control [8], sliding mode control [17], and \(H_{\infty }\) control [11].
Autonomous velocity regulation has gained vital importance [13, 35, 36], motivated by, for example, the speed limits imposed on driving zones (e.g., school zones and commercial streets) and the relative positions required with respect to surrounding vehicles and obstacles for safety and transport efficiency. Velocity regulation requires the vehicle to operate in either drive or brake mode. However, the current frameworks ignore the wheel slip regulation for safety [13, 35, 36], which has always been a common control objective of TCS and ABS. Concurrent velocity and slip regulation is therefore indispensable for enhanced safety, drivability, stability, and steerability [5, 8, 10, 12, 30, 31, 32, 38]. Inspired by these observations, this article focuses on safe velocity regulation through integrating the TCS and ABS. More concretely, the vehicle asymptotically steers its angular and longitudinal velocities to the provided references while restricting its wheel slips to the safety envelopes to prevent slipping and sliding during intense braking and accelerating events.
However, as a typical cyber-physical system, self-driving vehicles integrate the vehicular cyber system with the vehicular physical system and the environmental model for control and operation, whose increasing complexity hinders its reliability, especially when system failures occur. The Simplex architecture—using simplicity to control complexity—provides a reliable control system via software approach, whose core idea is to tolerate control software failures [34]. For a self-driving vehicle, its complicated control missions (e.g., traction control and parallel parking) exacerbate the difficulty of keeping the system safe in the presence of physical failures, since the control actuation computed in cyber layer depends on the physical modeling. Moreover, the inaccurate vehicle and/or tire parameters are the main obstacles for preventing wheel slip-based control in TCS and ABS. Hence, the Simplex architecture needs an adaptive controller—compensating for model and parameter uncertainties—as a verified safe controller to tolerate physical failures as well. Among the various adaptive control methods, \(\mathcal {L}_{1}\) adaptive controller has been widely adopted due to its fast adaptation, guaranteed robustness, and predictable transient response [20, 21]. Considering that \(\mathcal {L}_{1}\) adaptive control has been verified consistently with the theory in dealing with physical failures with transient performance and robustness guarantees [4, 9, 25], Wang et al. in Reference [37] proposed an \(\mathcal {L}_{1}\)-Simplex to tolerate concurrent software and physical failures. Inspired by the attractive properties of \(\mathcal {L}_{1}\)-Simplex, this article proposes a variant of \(\mathcal {L}_{1}\)-Simplex for safe velocity regulation of self-driving vehicles, where \(\mathcal {L}_{1}\) adaptive controller works as a verified safe controller that compensates for uncertainties, disturbances, software, and physical failures.
One of the fundamental assumptions of model-based controllers is the availability of a relatively accurate model of the underlying dynamics in consideration. However, vehicle dynamics highly depend on the driving environments [8, 12] and can be significantly different from one road (e.g., asphalt) to another one (e.g., snow). Therefore, a single off-line-built vehicle model cannot capture the differences in the dynamics induced by environmental variations. To address the model mismatch issue, we bring switching control scheme into \(\mathcal {L}_{1}\)-Simplex, where multiple off-line-built models that correspond to different environments (e.g., snow and icy) are stored in \(\mathcal {L}_{1}\) adaptive control architecture, thus yielding the switching \(\mathcal {L}_{1}\) adaptive controller. The switching \(\mathcal {L}_{1}\) adaptive controller aims at the safe velocity regulation in the dynamically changing environmental conditions that can be modeled offline, where each model’s remaining mismatch can be compensated by the \(\mathcal {L}_{1}\) adaptive controller.
Due to the high dependence of vehicle dynamics on driving environments (including, e.g., air mass density, wind velocity, and road friction coefficient [29]), it is unreasonable to expect that the off-line-built multiple models are sufficient to accurately describe the vehicle–environment interaction dynamics in an unforeseen or unprepared environment, as, e.g., the 2019 New York City snow squall [7]. When the unforeseen environments cause deviation from the safety envelope or the uncertainty measurement threshold in the time-critical environment, timely learning and updating the vehicle model using most recent sensor data (generated in the unforeseen environment) is indispensable for safe velocity regulation. To address the problem, we incorporate finite-time model learning into \(\mathcal {L}_{1}\)-Simplex, which can timely learn and update a vehicle model for \(\mathcal {L}_{1}\) adaptive controller in the unforeseen driving environments.
To this end, we propose a novel Switching \(\mathcal {L}_{1}\)-Simplex architecture (S\(\mathcal {L}_{1}\)-Simplex) with the novel incorporation of switching \(\mathcal {L}_{1}\) adaptive controller and finite-time model learning for self-driving vehicles, which is able to achieve
safe velocity regulation in the dynamic and unforeseen driving environments,
safety envelope extension, and
tolerance of concurrent software and physical failures.
This article is organized as follows. In Section 2, we present the preliminaries, including longitudinal vehicle model and the S\(\mathcal {L}_{1}\)-Simplex architecture. The safety envelope is formulated in Section 3. In Section 4, we present the off-line-built vehicle models and the finite-time model learning procedure, based on which we present the S\(\mathcal {L}_{1}\)-Simplex design in Section 5. We present the experiments in Section 6. We finally present our conclusions and future research directions in Section 7.

2 Preliminaries

2.1 Notation

We let \(\mathbb {R}^{2}\) denote the set of two-dimensional real vectors and \(\mathbb {R}^{2 \times 2}\) the set of \(2 \times 2\) real matrices. \(\mathbb {N}\) denotes the set of natural numbers, and \(\mathbb {N}_{0} = \mathbb {N} \cup \lbrace 0\rbrace\). \({\mathbb {V}} \setminus \mathbb {K}\) denotes the complement of \(\mathbb {K}\) with respect to \(\mathbb {V}\). \(\mathbf {I}\) and \(\mathbf {1}\), respectively, denote the identity matrix and the vector of all ones, with proper dimensions. For \(x \in \mathbb {R}^{2}\), \(\left\Vert x \right\Vert = \sqrt {x_1^2 + x_2^2}\). For \(A \in \mathbb {R}^{2 \times 2}\), \(\left\Vert A \right\Vert\) denotes the induced 2-norm of \(A\), and \(|| A ||_{\mathrm{F}}\) denotes the Frobenius norm of \(A\). The superscript “\(\top\)” denotes matrix transpose. \(|\mathbb {T}|\) denotes the cardinality (i.e., size) of a set \(\mathbb {T}\). We write \(P \gt 0\) (\(P \lt 0\)) to denote a positive definite (negative definite) matrix \(P\). Given a symmetric matrix \(P\), \(\lambda _{\min }(P)\) and \(\lambda _{\max }(P)\) are its minimum and maximum eigenvalues, respectively. The \(\mathcal {L}_{1}\) norm of a function \(x(t)\) is denoted by \(\left\Vert x(t) \right\Vert _{\mathcal {L}_{1}}\), and \({\left\Vert x \right\Vert _{{\mathcal {L}_\infty }\left[ {a,b} \right]}} = {\sup _{a \le t \le b}}\left\Vert {x(t)} \right\Vert\). We denote \(x(s) = \mathfrak {L}\left\lbrace x (t) \right\rbrace\), where \(\mathfrak {L}(\cdot)\) stands for the Laplace transform operator. The gradient of \(f(x)\) at \(x\) is denoted by \(\nabla f(x)\).

2.2 Switching ℒ1-Simplex Architecture

In this subsection, we introduce the Simplex architecture with the incorporation of safe switching control and finite-time model learning, which is adapted from the \(\mathcal {L}_{1}\)-Simplex proposed in Reference [37]. We first present the assumption on the Simplex architecture for self-driving vehicles.
Assumption 1.
The vehicle is equipped with sensors for real-time environmental perception, which can accurately detect the driving environments.
As described by Figure 1, the proposed S\(\mathcal {L}_{1}\)-Simplex architecture for self-driving vehicles includes
Fig. 1.
Fig. 1. S\(\mathcal {L}_{1}\)-Simplex architecture.
High-Performance Controller (HPC): The HPC is a complex controller that provides high levels of performance and advanced functionalities (e.g., the cautious model predictive control [19], \(\mathcal {L}_1-\mathcal {GP}\) [15], and the end-to-end control via variational autoencoder [6]); it is active during normal operation of the system and is possibly not fully verified.
Model Learning and \(\mathcal {L}_{1}\) Based High-Assurance Controller (M\(\mathcal {L}_{1}\)HAC): The M\(\mathcal {L}_{1}\)HAC is a simple and verified controller that provides limited levels of performance and reduced functionalities to guarantee safe and stable operation of the vehicle. As shown in Figure 2, the M\(\mathcal {L}_{1}\)HAC includes
the \(\mathcal {L}_{1}\) adaptive controller, which compensates for uncertainties, disturbances, software, and physical failures for velocity regulation;
the stored off-line-built vehicle models (obtained via, e.g., Gaussian process regression [19]) that vary with environments, which guarantee safe velocity regulation in the dynamic normal (known and prepared) driving environments;
the finite-time model learning, which timely learns and updates the vehicle model for safe velocity regulation in the unforeseen driving environments;
the switching logic that depends on the environmental perception and the real-time verification of safety envelope, which is responsible for activating an off-line-built model or on-line learned model for \(\mathcal {L}_{1}\) adaptive controller.
Uncertainty Monitor: This verified monitor takes the form of the state predictor in \(\mathcal {L}_{1}\) adaptive control architecture, which provides estimates of the uncertainties inside the vehicle system with fast adaptation.
Decision Logic: This verified logic depends on the magnitudes of uncertainty estimations and the real-time verification of the safety envelope, which triggers the switching from HPC to M\(\mathcal {L}_{1}\)HAC in the events of software and/or physical failures and/or large model mismatch occurrence.
Fig. 2.
Fig. 2. M\(\mathcal {L}_{1}\)HAC: \(\mathcal {L}_{1}\)-based HAC architecture with switching control and finite-time model learning.
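The decision logic described above can be summarized as a small supervisor that demotes the HPC whenever the monitored uncertainty exceeds its threshold or the tracking error leaves the linear safety constraint of Equation (21). The sketch below is illustrative only; the function name, the scalar uncertainty signal, and the threshold are our assumptions, not the article's implementation.

```python
def select_controller(uncertainty_est, e, c_hat, threshold, active):
    """Illustrative decision logic: switch from HPC to ML1HAC when the
    uncertainty estimate exceeds its threshold or the tracking error e
    violates -1 <= c_hat^T e <= 1 (the safety constraint of Eq. (21))."""
    failure_suspected = abs(uncertainty_est) > threshold
    c_e = c_hat[0] * e[0] + c_hat[1] * e[1]
    outside_envelope = not (-1.0 <= c_e <= 1.0)
    if active == "HPC" and (failure_suspected or outside_envelope):
        return "ML1HAC"  # fall back to the verified high-assurance controller
    return active        # otherwise keep the currently active controller
```

Note that the switch is one-directional here: once M\(\mathcal {L}_{1}\)HAC is active it stays active, which matches the conservative role of the HAC during failures.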
Remark 1.
In the proposed Simplex architecture, finite-time model learning runs in parallel with M\(\mathcal {L}_{1}\)HAC and HPC, as depicted in Figure 2. This configuration guarantees that when model learning is needed for reliable decision making, a model that corresponds to the current operating environment is available immediately, so that M\(\mathcal {L}_{1}\)HAC is always in control. Without this parallel-running configuration, the car could lose control in unforeseen environments due to the time delay in collecting state samples and learning.
As shown in Figures 1 and 2, the proposed Simplex includes three types of switching: (1) switching between HPC and M\(\mathcal {L}_{1}\)HAC, (2) switching between stored vehicle models and learned vehicle models, and (3) switching between two subsystems in a fixed normal environment. Therefore, Zeno behavior must be excluded to guarantee the feasibility of the proposed framework. To achieve this, we impose a minimum dwell time \({\mathrm{dwell}_{\min }}\) on HPC, M\(\mathcal {L}_{1}\)HAC, learned vehicle models, and stored sub-models, i.e.,
\begin{align} \mathop {\min }\limits _{\forall k \in {\mathbb {N}_0}} \left\lbrace {{t_{k + 1}} - {t_k}} \right\rbrace \ge {\mathrm{dwell}_{\min }} \gt 0, \end{align}
(1)
where \(t_{k}\) denotes a switching time.
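The dwell-time constraint (1) can be enforced by a thin supervisor that simply rejects switch requests arriving too soon after the last accepted one. The class below is a minimal sketch under our own naming, not the article's code:

```python
class DwellTimeSupervisor:
    """Enforce the minimum dwell time of Equation (1): accepted switch
    times t_k must satisfy t_{k+1} - t_k >= dwell_min > 0."""

    def __init__(self, dwell_min):
        self.dwell_min = dwell_min
        self.last_switch = None  # no switch accepted yet

    def request_switch(self, t):
        # Accept the first switch unconditionally, then require that
        # the elapsed time since the last accepted switch is >= dwell_min.
        if self.last_switch is None or t - self.last_switch >= self.dwell_min:
            self.last_switch = t
            return True
        return False
```

A request at \(t = 0.3\) after a switch at \(t = 0\) with \(\mathrm{dwell}_{\min } = 0.5\) is rejected, which is exactly the mechanism that excludes Zeno behavior.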

2.3 Safe Objectives

The proposed Simplex has two objectives of safe control, which are formally stated below.
Safe Objective 1.
The vehicle asymptotically steers its angular and longitudinal velocities to the provided references, while restricting its wheel slips to the safety envelopes in the dynamic and unforeseen driving environments.
Safe Objective 2.
The vehicle control system tolerates the concurrent software and physical failures.

2.4 Vehicle Model

Moving forward, we present the following assumption pertaining to the vehicle longitudinal model for our model-based control systems: TCS and ABS. The parameter notation of the vehicle model is given in Table 1.
Table 1.
\(w\): Angular velocity
\(v\): Longitudinal velocity
\(J\): Wheel rotational inertia
\(T_{b}\): Brake torque
\(T_{c}\): Friction torque on wheel
\(T_{e}\): Engine torque
\(T_{w}\): Viscous torque on wheel
\(F_{a}\): Longitudinal aerodynamic drag force
\(\zeta\): Aerodynamic drag constant
\(\varrho\): Viscous friction in driven wheel
\(P\): Master cylinder pressure
\(r\): Wheel radius
\(m\): Vehicle mass
\(C\): Brake piston effective area
\(\eta _b\): Pad friction coefficient
\({r_b}\): Brake disc effective radii
\(h\): Gravity center height
Table 1. Vehicle Model Parameters
Assumption 2.
The vehicle’s
dynamics of the left and right sides are identical (i.e., the vehicle is symmetric);
wheel is damped with a viscous torque [24], i.e.,
\begin{align} T_{w}(t) = \varrho w(t); \end{align}
(2)
longitudinal aerodynamic drag force can be linearized in terms of the longitudinal velocity [24], i.e.,
\begin{align} F_{a}(t) = \zeta v(t). \end{align}
(3)
The longitudinal vehicle model is depicted in Figure 3, whose control variables are the engine torque and the master cylinder pressure. The uncertainties pertaining to the relations (2) and (3) are included in the following dynamics of the vehicle’s longitudinal and wheel motions on a flat road [29]:
\begin{align} J\dot{w}(t) &= {T_e}(t) - {T_w}(t) - {T_b}(t) - {T_c}(t) + \tilde{f}_{w}(t), \end{align}
(4a)
\begin{align} m\dot{v}(t) &= \frac{{{T_c}(t)}}{r} - {F_{a}}(t) + \tilde{f}_{v}(t), \end{align}
(4b)
Fig. 3.
Fig. 3. Front-wheel-driven vehicle model.
where \(\tilde{f}_{w}(t)\) and \(\tilde{f}_{v}(t)\) represent uncertainties that are due to the modeling errors, noise, disturbances, unmodeled forces/torques in Equations (2) and (3) and others.
Following Reference [18], the actual relation between the master cylinder pressure and the brake torque is modeled by a linear model with uncertainty:
\begin{align} {T_b}(t) = C{\eta _b}{r_b}P(t) + \varpi _{b}(t), \end{align}
(5)
where the unknown \(\varpi _{b}(t)\) denotes the uncertainty.
The wheel slip is defined in terms of the wheel and longitudinal velocities as
\begin{align} s(t) = \left| {v(t) - rw(t)} \right|. \end{align}
(6)
We note that \(s(t) = 0\) and \(s(t) = \max \left\lbrace {v(t),rw(t)} \right\rbrace\) indicate pure rolling and full sliding, respectively. In this article, the slip will be imposed on velocity tracking control as a safety constraint.
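Equation (6) and the two boundary cases above admit a one-line implementation; the helper below is an illustrative sketch with names of our choosing:

```python
def wheel_slip(v, w, r):
    """Wheel slip per Equation (6): s = |v - r*w|, where v is the
    longitudinal velocity, w the angular velocity, and r the wheel radius."""
    return abs(v - r * w)

# Pure rolling (v = r*w) gives s = 0.  Full sliding is the case
# s = max{v, r*w}: a locked wheel (w = 0) gives s = v, while a
# spinning wheel on ice (v = 0) gives s = r*w.
```

In the article, this quantity is what the safety constraint (17) later bounds from above.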
We now define a set of environmental model indices:
\begin{align} \mathbb {E} = \lbrace \mathrm{dry},~\mathrm{wet},~\mathrm{snow},~\ldots ,~\mathrm{icy},~\mathrm{learned}_{1},~\mathrm{learned}_{2},~ \ldots ,~\mathrm{learned}_{q}\rbrace , \end{align}
(7)
for which we further define a subset:
\begin{align} \mathbb {L} = \lbrace \mathrm{learned}_{1},~\mathrm{learned}_{2},~\ldots ,~\mathrm{learned}_{q}\rbrace , \end{align}
(8)
which denotes a set of models learned in an unforeseen driving environment, wherein the \(\mathcal {L}_{1}\) controller relies on the learned models. In this article, \(\mathbb {L}\) is also used to indicate an unforeseen driving environment.
Remark 2.
In our proposed framework, the learned model is not necessarily a single one. For example, suppose the learned model comes from data generated in the slipping mode; when working in the skidding mode, if the (learned) model mismatch leads to a deviation from the safety envelope or the uncertainty measurement threshold, then the finite-time model learning will be triggered to immediately output a learned model that replaces the previously learned one in M\(\mathcal {L}_{1}\)HAC. In the most ideal scenario (as the one in our experiment section), if the model mismatch never triggers the model learning, then the Simplex only uses the first learned model. The ideal scenario means the first learned model captures the critical system properties in both slipping and skidding modes, and the \(\mathcal {L}_{1}\) adaptive controller then compensates for the un-captured system properties, such that the single model is sufficient to enhance safety assurance.
The experimental data of tire friction models show that the tire friction torque depends on the slip (or slip ratio) and the road friction coefficient [8, 12]. We also use a linear model with uncertainty to describe the actual relation among the tire friction torque, slip, and road friction coefficient, i.e., \({T_c}(t) = {k_{\sigma (t)}}s(t) + \varpi _{c}(t)\), where \({k_{\sigma (t)}}\) is obtained from experimental data via parameter identification, \(\varpi _{c}(t)\) denotes the unknown uncertainty, and \(\sigma (t) \in \mathbb {E}\), where, e.g., \(\sigma (t) = \mathrm{snow}\) for \(t \in [t_{k}, t_{k+1})\) means the vehicle is driving in the snow environment during the time interval \([t_{k}, t_{k+1})\). With the consideration of Equation (6), \({T_c}(t)\) is equivalently expressed as
\[\begin{eqnarray*} {T_c}(t) = {\left\lbrace \begin{array}{ll}(v(t) - rw(t)){{k}_{\sigma (t)}} + {\varpi _c}(t),~~{\rm {if}}\;v(t) \ge w(t)r,~\sigma (t) \in \mathbb {E} \setminus \mathbb {L},\\ (rw(t) - v(t)){{k}_{\sigma (t)}} + {\varpi _c}(t),~~{\rm {if}}\;v(t) \lt w(t)r,~\sigma (t) \in \mathbb {E} \setminus \mathbb {L}, \end{array}\right.}\nonumber \end{eqnarray*}\]
substituting which together with Equations (2), (3), and (5) into Equation (4) yields a vehicle model with uncertainties:
if \(v(t) \ge w(t)r\), \(\sigma (t) \in \mathbb {E} \setminus \mathbb {L}\)
\begin{align} \dot{w}(t) &= \frac{{{rk_{\sigma (t)}} - \varrho }}{J}w(t) - \frac{{{k_{\sigma (t)}}}}{{J}}v(t) + {u}(t) + {f_w}(t), \end{align}
(9a)
\begin{align} \dot{v}(t) &= - \frac{{{k_{\sigma (t)}}}}{{m}}w(t) + \frac{{{k_{\sigma (t)}} - \zeta {r}}}{{m{r}}}v(t) + {f_v}(t); \end{align}
(9b)
if \(v(t) \lt w(t)r\), \(\sigma (t) \in \mathbb {E} \setminus \mathbb {L}\)
\begin{align} \dot{w}(t) &= - \frac{{{rk_{\sigma (t)}} + \varrho }}{J}w(t) + \frac{{{k_{\sigma (t)}}}}{{J}}v(t) + {u}(t) + {f_w}(t), \end{align}
(10a)
\begin{align} \dot{v}(t) &= \frac{{{k_{\sigma (t)}}}}{{m}}w(t) - \frac{{{k_{\sigma (t)}} + \zeta {r}}}{{m{r}}}v(t) + {f_v}(t), \end{align}
(10b)
where \({u}(t)\) denotes the control input (\({u}(t) \gt 0\) and \({u}(t) \lt 0\) indicate activated drive and brake modes, respectively), and
\begin{align} \!\!\!\!{u}(t) = \frac{{{T_e}(t) - C{\eta _b}{r_b}P(t)}}{J},~{f_v}(t) = \frac{{{\varpi _c}(t) + r\tilde{f}_{v}(t)}}{{mr}},~ {f_w}(t) = - \frac{{{\varpi _c}(t) + {\varpi _b}(t) + \tilde{f}_{w}(t)}}{J}. \end{align}
(11)
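Since \(u(t) = ({T_e}(t) - C\eta _b r_b P(t))/J\) in Equation (11), a commanded input can be realized by one actuator at a time: engine torque in drive mode and master cylinder pressure in brake mode. The following sketch inverts this relation under our own (not the article's) assumption that drive and brake are never applied simultaneously:

```python
def allocate_actuation(u, J, C, eta_b, r_b):
    """Invert u = (T_e - C*eta_b*r_b*P)/J from Equation (11).
    Returns (T_e, P): drive mode (u >= 0) uses engine torque only;
    brake mode (u < 0) uses master cylinder pressure only."""
    if u >= 0.0:
        return J * u, 0.0                       # T_e = J*u, no braking
    return 0.0, -J * u / (C * eta_b * r_b)      # P > 0, no engine torque
```

For example, with \(J = 1.2\), \(C\eta _b r_b = 0.1\) (illustrative values), a brake command \(u = -1\) maps to a pressure \(P = 12\).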

3 Safety Envelopes

This article considers velocity regulation via tracking the provided references of longitudinal and angular velocities, denoted by \(\mathbf {v}^\mathrm{r}_\sigma\) and \(\mathbf {w}^\mathrm{r}_\sigma\), respectively. For the Safe Objective 1, the provided velocity references are required to satisfy the following condition:
\begin{align} \left|\mathbf {v}^\mathrm{r}_{\sigma (t)} - r\mathbf {w}^\mathrm{r}_{\sigma (t)}\right| - \mathfrak {a}_{\sigma (t)} = 0, {\mathfrak {a}_{\sigma (t)}} = \left\lbrace \begin{gathered}\frac{{\zeta r\mathrm{v}_{\sigma (t)}^{\text{r}}}}{{{k_{\sigma (t)}}}}, \sigma (t) \in \mathbb {E} \setminus \mathbb {L} \\ {\breve{\mathfrak {a}}_{\sigma (t)}}, \sigma (t) \in \mathbb {L} \\ \end{gathered}, \right. \end{align}
(12)
where \(\mathbb {E}\) and \(\mathbb {L}\) are defined in Equations (7) and (8), respectively.
Remark 3.
We note that vehicle parameters \(\zeta\), \(r\), and \(k_{\sigma (t)}\) in Equation (12) indicate that the velocity and slip references depend on the vehicle model. Therefore, the slip reference \({\breve{\mathfrak {a}}_{\text{learned}}}\) in the unforeseen environments depends on the learned models, which is determined according to Equation (41), such that we can obtain the tracking error dynamics (Equation (47)).
The relation (12), in conjunction with Equation (6), indicates that the slip reference is
\begin{align} {s_{\sigma (t)}^\mathrm{r}} = \left| {\mathbf {v}^\mathrm{r}_{\sigma (t)} - r\mathbf {w}^\mathrm{r}_{\sigma (t)}} \right| = \mathfrak {a}_{\sigma (t)}, \end{align}
(13)
which depends on the driving environment indexed by \({\sigma (t)}\).
The velocity tracking error vector is obtained as
\begin{align} {e}(t) = [ {{e_w}(t),~{e_v}(t)} ]^\top = [w(t),~v(t)]^\top - \left[\mathbf {w}^\mathrm{r}_{\sigma (t)},~\mathbf {v}^\mathrm{r}_{\sigma (t)}\right]^\top , \end{align}
(14)
considering which and Equation (12), we have
\begin{align} \left| {rw(t) - v(t)} \right| &= \left| {rw(t) - v(t) - r\mathbf {w}^\mathrm{r}_{\sigma (t)} + \mathbf {v}^\mathrm{r}_{\sigma (t)} + \mathfrak {a}_{\sigma (t)}} \right| \nonumber \nonumber\\ &= \left| {{e_v(t)} - {re_w(t)} - \mathfrak {a}_{\sigma (t)}} \right|, \mathbf {v}^\mathrm{r}_{\sigma (t)} \lt r\mathbf {w}^\mathrm{r}_{\sigma (t)}, \end{align}
(15)
\begin{align} \left| {rw(t) - v(t)} \right| &= \left| {v(t) - rw(t) - \mathbf {v}^\mathrm{r}_{\sigma (t)} + r\mathbf {w}^\mathrm{r}_{\sigma (t)} + \mathfrak {a}_{\sigma (t)}} \right| \nonumber \nonumber\\ &= \left| {{e_v(t)} - {re_w(t)} + \mathfrak {a}_{\sigma (t)}} \right|, \mathbf {v}^\mathrm{r}_{\sigma (t)} \ge r\mathbf {w}^\mathrm{r}_{\sigma (t)}. \end{align}
(16)
In addition to velocity regulation for collision avoidance, lane keeping, and other constraints, the wheel slip \(s(t)\) defined in Equation (6) should be below a safety boundary \(\mu _{\sigma (t)}\) to prevent slipping and sliding, i.e.,
\begin{align} s(t) = \left|rw(t) - v(t) \right| \le {\mu _{\sigma (t)}}, \end{align}
(17)
which in light of Equations (15) and (16) can be equivalently expressed as
\begin{align} &- {\mu _{\sigma (t)}} + \mathfrak {a}_{\sigma (t)} \le {e_v(t)} - r{e_w(t)} \le {\mu _{\sigma (t)}} + \mathfrak {a}_{\sigma (t)}, \text{if}~\mathbf {v}^\mathrm{r}_{\sigma (t)} \lt r\mathbf {w}^\mathrm{r}_{\sigma (t)}, \end{align}
(18)
\begin{align} &- {\mu _{\sigma (t)}} - \mathfrak {a}_{\sigma (t)} \le {e_v(t)} - r{e_w(t)} \le {\mu _{\sigma (t)}} - \mathfrak {a}_{\sigma (t)}, \text{if}~\mathbf {v}^\mathrm{r}_{\sigma (t)} \ge r\mathbf {w}^\mathrm{r}_{\sigma (t)}. \end{align}
(19)
Based on Equations (19) and (18), we define a set of vectors:
\begin{align} {\widehat{c}_{\sigma \left(t \right)}} = {\left[- \frac{1}{{{\mu _{\sigma (t)}} - {\mathfrak {a}_{\sigma (t)}}}},\frac{1}{{\left({{\mu _{\sigma (t)}} - {\mathfrak {a}_{\sigma (t)}}} \right)r}} \right]^\top }, \sigma (t) \in \mathbb {E}, \end{align}
(20)
by which we obtain the following lemma regarding safety formula.
Lemma 3.1.
The safety condition (17) holds if
\begin{align} -1 &\le {\widehat{c}}^{\top }_{\sigma (t)} e(t) \le 1, \end{align}
(21)
\begin{align} 0 &\le \mathfrak {a}_{\sigma (t)} \lt {\mu _{\sigma (t)}}, \end{align}
(22)
where \(\mathfrak {a}_{\sigma (t)}\), \(e(t)\), and \({\widehat{c}}_{\sigma (t)}\) are given in Equations (12), (14), and (20), respectively.
Proof.
Substituting Equation (20) into Equation (21) yields \(- 1 \le \frac{{{e_v}\left(t \right)}}{{\left({{\mu _{\sigma (t)}} - {\mathfrak {a}_{\sigma (t)}}} \right)r}} - \frac{{{e_w}\left(t \right)}}{{{\mu _{\sigma (t)}} - {\mathfrak {a}_{\sigma (t)}}}} \le 1\), which, in conjunction with condition (22), leads to
\begin{align} - {\mu _{\sigma (t)}} + \mathfrak {a}_{\sigma (t)} \le \frac{{e_v(t)}}{r} - {e_w(t)} \le {\mu _{\sigma (t)}} - \mathfrak {a}_{\sigma (t)}. \end{align}
(23)
It straightforwardly follows from \({\mu _{\sigma (t)}} - \mathfrak {a}_{\sigma (t)} \le {\mu _{\sigma (t)}} + \mathfrak {a}_{\sigma (t)}\) that Equation (23) implies Equations (19) and (18). Moreover, the inequalities (19) and (18) equivalently describe the safety condition (17) via the transformations (15) and (16). We thus conclude that Equation (17) holds if Equations (21) and (22) are satisfied.□
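The key step of the proof, that condition (21) with the vector of Equation (20) is equivalent to \(|e_v/r - e_w| \le \mu - \mathfrak {a}\), can be spot-checked numerically. The values of \(\mu\), \(\mathfrak {a}\), and \(r\) below are illustrative, not from the article:

```python
import random

def c_hat(mu, a, r):
    # Equation (20), assuming mu > a >= 0 as required by condition (22).
    return (-1.0 / (mu - a), 1.0 / ((mu - a) * r))

# Randomized check: whenever condition (21) holds for e = (e_w, e_v),
# the bound |e_v / r - e_w| <= mu - a of Equation (23) must hold too.
random.seed(0)
mu, a, r = 0.3, 0.1, 0.35  # illustrative values with 0 <= a < mu
ch = c_hat(mu, a, r)
for _ in range(1000):
    e = (random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0))
    if -1.0 <= ch[0] * e[0] + ch[1] * e[1] <= 1.0:        # condition (21)
        assert abs(e[1] / r - e[0]) <= (mu - a) + 1e-12   # Equation (23)
```

The check passes because \(\widehat{c}^{\top } e = (e_v/r - e_w)/(\mu - \mathfrak {a})\) exactly, so condition (21) rescales Equation (23) by \(\mu - \mathfrak {a} \gt 0\).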
Building on Equation (21), the safety constraint set for ideal vehicle models is defined as follows:
\begin{align} \Omega _{\sigma (t)} = \left\lbrace {\left.{e(t) \in {\mathbb {R}^2}} \right|c^{\top }_{\sigma (t)}e(t) \le 1} \right\rbrace ,~\sigma (t) \in \mathbb {E}, \end{align}
(24)
where we define
\begin{align} c_{\sigma (t)} = {\left\lbrace \begin{array}{ll}\widehat{c}_{\sigma (t)},&{\rm {if}}\;v(t) \ge rw(t)\\ -\widehat{c}_{\sigma (t)},&{\rm {if}}\;v(t) \lt rw(t). \end{array}\right.} \end{align}
(25)
In addition, we define the following invariant sets and boundary sets, which will be used to determine the safety envelopes:
\begin{align} {\Phi _{\sigma (t)}} &= \left\lbrace {\left. {e(t) \in {\mathbb {R}^2}} \right|{e^\top (t)}{\bar{P}_{\sigma (t)}}e(t) \le 1, {{\bar{P}}_{\sigma (t)}} \gt 0} \right\rbrace ,~\sigma (t) \in \mathbb {E} \end{align}
(26)
\begin{align} \partial {\Phi _{\sigma (t)}} &= \left\lbrace {\left. {e(t) \in {\mathbb {R}^2}} \right|{e^\top (t)}{\bar{P}_{\sigma (t)}}e(t) = 1, {{\bar{P}}_{\sigma (t)}} \gt 0} \right\rbrace ,~\sigma (t) \in \mathbb {E}. \end{align}
(27)
The following lemma provides a condition under which \(\Phi _{\sigma (t)}\) is a subset of safety set \(\Omega _{\sigma (t)}\), which will be used for safe velocity regulation.
Lemma 3.2.
Consider the safety sets (24) and (26). \(\Phi _{\sigma (t)}\) \(\subseteq\) \(\Omega _{\sigma (t)}\) holds if and only if \({\widehat{c}}^{\top }_{\sigma (t)}\bar{P}^{-1}_{\sigma (t)}{\widehat{c}}_{\sigma (t)} \le 1\), \(\sigma (t) \in \mathbb {E}\), where \({\widehat{c}}_{\sigma (t)}\) is given in Equation (20).
Proof.
It is straightforward to verify from Equation (25) that \({c}^{\top }_{\sigma (t)}\bar{P}^{-1}_{\sigma (t)}{c}_{\sigma (t)} = \widehat{c}^{\top }_{\sigma (t)}\bar{P}^{-1}_{\sigma (t)}\widehat{c}_{\sigma (t)}\). The rest of the proof is the same as that of Lemma 4.1 in Reference [33] (letting \(x = e(t)\), \(P = \bar{P}^{-1}_{\sigma (t)}\), and \(\alpha _{k} = c_{\sigma (t)}\)) and is omitted here.□
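The algebraic test of Lemma 3.2 is cheap to evaluate for the \(2 \times 2\) matrices used here. The sketch below (our own helper names, illustrative matrices) computes \(\widehat{c}^{\top }\bar{P}^{-1}\widehat{c} \le 1\) and cross-checks it by sampling the ellipse boundary:

```python
import math

def inv2(P):
    # Inverse of a 2x2 matrix given as ((a, b), (c, d)).
    (a, b), (c, d) = P
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def quad(P, x):
    # The quadratic form x^T P x for a 2x2 matrix P.
    return (x[0] * (P[0][0] * x[0] + P[0][1] * x[1])
            + x[1] * (P[1][0] * x[0] + P[1][1] * x[1]))

def ellipse_in_slab(P_bar, c_hat):
    """Lemma 3.2: {e : e^T P_bar e <= 1} is contained in
    {e : |c_hat^T e| <= 1} iff c_hat^T P_bar^{-1} c_hat <= 1."""
    return quad(inv2(P_bar), c_hat) <= 1.0

# Spot check on a diagonal example: when the algebraic test passes,
# every boundary point of the ellipse satisfies the linear constraint.
P_bar, c_hat = ((4.0, 0.0), (0.0, 9.0)), (-1.0, 0.5)
assert ellipse_in_slab(P_bar, c_hat)
for k in range(360):
    t = 2 * math.pi * k / 360
    e = (math.cos(t) / 2.0, math.sin(t) / 3.0)  # satisfies e^T P_bar e = 1
    assert abs(c_hat[0] * e[0] + c_hat[1] * e[1]) <= 1.0
```

The maximum of \(|\widehat{c}^{\top } e|\) over the ellipse is \(\sqrt {\widehat{c}^{\top }\bar{P}^{-1}\widehat{c}}\), which is why the condition is both necessary and sufficient.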
In light of Lemma 3.2, the safety invariant set (26), and the safety boundary set (27), we present the safety envelopes for vehicles driving in different environments:
\begin{align} \underline{\text{Safety Envelopes}:}~~~~~~~~~~~~~~~~~~~ \Theta _{\sigma } = \left\lbrace {\left. {e \in {\mathbb {R}^2}} \right|{e^\top }{\bar{P}_{\sigma }}e \le \theta ~\text{and}~ \mathop {\min }\limits _{y \in \partial {\Phi _{\sigma }}} \left\Vert {e - y} \right\Vert \ge {\varepsilon }}\right\rbrace , \end{align}
(28)
where \(0 \lt \theta \lt 1\) and \(0 \lt \varepsilon \lt 1\).
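Membership in the envelope (28) requires both a shrunk ellipsoid bound and a margin to the boundary set (27). As a rough sketch (our own construction, restricted to a diagonal \(\bar{P}\) for brevity), the boundary distance can be approximated by sampling the ellipse \(y^\top \bar{P} y = 1\):

```python
import math

def in_safety_envelope(e, P_diag, theta, eps, n=720):
    """Illustrative membership test for the safety envelope (28) with
    diagonal P_bar = diag(p1, p2): require e^T P_bar e <= theta and an
    approximate distance of at least eps to the boundary set (27),
    estimated by sampling n points on {y : y^T P_bar y = 1}."""
    p1, p2 = P_diag
    if p1 * e[0] ** 2 + p2 * e[1] ** 2 > theta:
        return False  # outside the shrunk invariant set
    dmin = min(
        math.hypot(e[0] - math.cos(2 * math.pi * k / n) / math.sqrt(p1),
                   e[1] - math.sin(2 * math.pi * k / n) / math.sqrt(p2))
        for k in range(n))
    return dmin >= eps  # keep a margin eps to the boundary
```

With \(\bar{P} = \mathrm{diag}(4, 4)\) (boundary at radius 0.5), the origin lies in the envelope for \(\theta = 0.25\), \(\varepsilon = 0.2\), whereas a point at radius 0.3 already violates the \(\theta\)-bound.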

4 Model Switching and Model Learning

As shown in Figure 2, the operation of M\(\mathcal {L}_{1}\)HAC relies on the off-line-built vehicle models corresponding to the prepared environments and the models learned via finite-time model learning in an unforeseen driving environment.

4.1 Off-Line-Built Switching Models

The stored off-line-built switching models are obtained straightforwardly from Equations (9) and (10) by dropping the uncertainties, as follows:
if \(v(t) \ge rw(t)\),
\begin{align} \dot{\bar{\mathrm{w}}}(t) = \frac{{{rk_{\sigma (t)}} - \varrho }}{J}\bar{\mathrm{w}}(t) - \frac{{{k_{\sigma (t)}}}}{{J}}\bar{\mathrm{v}}(t) + \bar{\mathrm{u}}(t), \dot{\bar{\mathrm{v}}}(t) = - \frac{{{k_{\sigma (t)}}}}{{m}}\bar{\mathrm{w}}(t) + \frac{{{k_{\sigma (t)}} - \zeta {r}}}{{m{r}}}\bar{\mathrm{v}}(t); \end{align}
(29)
if \(v(t) \lt rw(t)\),
\begin{align} \dot{\bar{\mathrm{w}}}(t) = - \frac{{{rk_{\sigma (t)}} + \varrho }}{J}\bar{\mathrm{w}}(t) + \frac{{{k_{\sigma (t)}}}}{{J}}\bar{\mathrm{v}}(t) + \bar{\mathrm{u}}(t), \dot{\bar{\mathrm{v}}}(t) = \frac{{{k_{\sigma (t)}}}}{{m}}\bar{\mathrm{w}}(t) - \frac{{{k_{\sigma (t)}} + \zeta {r}}}{{m{r}}}\bar{\mathrm{v}}(t). \end{align}
(30)
We now define
\begin{align} \bar{\mathrm{x}}(t) = \left[ \begin{gathered}\bar{\mathrm{w}}(t) \\ \bar{\mathrm{v}}(t) \\ \end{gathered} \right]\!,~B = \left[ \begin{gathered}1 \\ 0 \\ \end{gathered} \right]\!,~{A_{\sigma _{1}(t)}} = \left[ \begin{array}{*{20}{c}} {\frac{{{rk_{\sigma (t)} - \varrho }}}{J}} & { - \frac{{{k_{\sigma (t)}}}}{{J}}} \\ { - \frac{{{k_{\sigma (t)}}}}{{m}}} & {\frac{{{k_{\sigma (t)} - r\zeta }}}{{m{r}}}} \end{array} \right]\!,~{A_{\sigma _{2}(t)}} = \left[ \begin{array}{*{20}{c}} { - \frac{{{rk_{\sigma (t)} + \varrho }}}{J}} & {\frac{{{k_{\sigma (t)} }}}{{J}}} \\ {\frac{{{k_{\sigma (t)} }}}{{m}}} & { - \frac{{{k_{\sigma (t)} + r\zeta }}}{{m{r}}}} \end{array} \right]\!, \end{align}
(31)
by which, the off-line-built switching models, consisting of Equations (29) and (30), are rewritten as
\begin{align} \dot{\bar{\mathrm{x}}}(t) &= {A_{\widetilde{\sigma }(t)}}\bar{\mathrm{x}}(t) + B\bar{\mathrm{u}}(t), \widetilde{\sigma }(t) = {\left\lbrace \begin{array}{ll}\sigma _{1}(t),&v(t) \ge w(t)r, \sigma (t) \in \mathbb {E} \setminus \mathbb {L}\\ \sigma _{2}(t),&v(t) \lt w(t)r, \sigma (t) \in \mathbb {E} \setminus \mathbb {L}\\ \sigma (t),&\sigma (t) \in \mathbb {L} \\ \end{array}\right.}. \end{align}
(32)
Meanwhile, the real vehicle dynamics described by Equations (9) and (10) is rewritten as
\begin{align} \dot{x}(t) &= {A_{\widetilde{\sigma }(t)}}x(t) + Bu(t) + f_{0}(x, t), \end{align}
(33)
where \(f_{0}(x, t) = \left[ {f_{w}(t),f_{v}(t)} \right]^\top\).
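As an illustrative sketch of the switching model in Equations (31) and (32), the two regime matrices can be built directly from the physical parameters and selected by the sign of \(v(t) - rw(t)\). All parameter values below (\(r, J, m, k_{\sigma}, \varrho, \zeta\)) are placeholders, not the AutoRally values:

```python
import numpy as np

# Placeholder physical parameters: wheel radius r, wheel inertia J,
# mass m, environment-dependent stiffness k, friction terms rho, zeta.
r, J, m, k, rho, zeta = 0.19, 0.2, 21.0, 50.0, 0.1, 0.5

A1 = np.array([[(r * k - rho) / J, -k / J],
               [-k / m, (k - r * zeta) / (m * r)]])   # branch v >= r*w, as in (31)
A2 = np.array([[-(r * k + rho) / J, k / J],
               [k / m, -(k + r * zeta) / (m * r)]])   # branch v < r*w, as in (31)

def xdot(x, u):
    """Right-hand side of the switching model (32): pick the regime
    matrix by comparing v with r*w, then add the input through B = [1, 0]^T."""
    w, v = x
    A = A1 if v >= r * w else A2
    return A @ x + np.array([u, 0.0])
```

For example, at \(x = [1, 1]^{\top}\) we have \(v \ge rw\), so the \(A_{\sigma_1}\) branch is active.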
Remark 4.
We handle the un-modeled forces/torques, e.g., rolling resistance forces, as uncertainties, which will be compensated by \(\mathcal {L}_{1}\) adaptive controller in M\(\mathcal {L}_{1}\)HAC.

4.2 Finite-time Model Learning

4.2.1 Model Learning Procedure.

The unknown and unmeasured environmental characteristics can lead to a large mismatch between the off-line-built vehicle-environment interaction model (32) and the real vehicle behavior in unforeseen environments, in which case the control action is no longer reliable. This motivates employing finite-time model learning to timely learn and update a vehicle model using the most recent sensor data generated in the unforeseen environment.
Without loss of generality, the real vehicle dynamics in an unforeseen environment is written as
\begin{align} \dot{x}(t) &= {A_{\text{learned}}}x(t) + B_{\text{learned}}\widehat{u}(t) + f_{1}(x,t),~~~~~~~~~~~~~~~~~\text{learned} \in \mathbb {L}, \end{align}
(34)
where \(x(t) = [w(t),~v(t)]^{\top }\), \(B_{\text{learned}} = [b_{\text{learned}},~0]^{\top }\), \(\widehat{u}(t) \in \mathbb {R}\) is the control input, and \(f_{1}(x,t)\) denotes the uncertainty. The data sampling technique transforms the continuous-time dynamics (34) to the discrete-time one,
\begin{align} x(q + 1) = ({\mathbf {I} + T{A_{\text{learned}}}})x(q) + T B_{\text{learned}}\widehat{u}(q) + Tf_{1}(x,q), y({q}) = x({q}) + \mathbf {o}(q), \end{align}
(35)
where \(T\) is the sampling period, \(y\left({q} \right)\) is the observed sensor data, \(\mathbf {o}\left(q\right)\) is the observation/sensing noise, \(q \in \lbrace k, k+1, \ldots , k+m \rbrace\), and
\begin{align} k = \frac{{{t - \kappa }}}{T},m = \frac{\kappa }{T}. \end{align}
(36)
Remark 5.
The \(m\) in Equation (36) denotes the number of collected sensor samples. It follows from Equation (36) that \(kT = t-\kappa\) and \(({k + m})T = t\), which indicates that the state samples in the time interval \([t-\kappa , t]\) are used to learn the vehicle model denoted by \(({A_{\text{learned}}}, {B_{\text{learned}}})\).
Moving forward, we introduce the following:
\begin{align} \widehat{A} &= {\mathbf {I} + T{A_{\text{learned}}}}, \widehat{B} = T B_{\text{learned}}, \widehat{u}(p) \equiv \mathfrak {u}, \forall p \in \lbrace k, k+1, \ldots , k+m \rbrace , \end{align}
(37)
where the third term in Equation (37) means that the control input is kept constant for the sake of learning.
With the consideration of Equation (37), and following the finite-time learning procedure developed in Reference [28], we have
\begin{align} \widehat{A}_{\text{learned}} = \breve{Q}{\breve{P}^{ - 1}}, \widehat{B}_{\text{learned}}\mathfrak {u} = \frac{1}{m}\sum \limits _{z = k}^{k+m-1} {\left({y\left({z + 1} \right) - {{\widehat{A}}_{\text{learned}}}y\left(z \right)} \right)}, \end{align}
(38)
where \(\widehat{A}_{\text{learned}}\) and \(\widehat{B}_{\text{learned}}\) denote the learned ones corresponding to \(\widehat{A}\) and \(\widehat{B}\), and
\[\begin{eqnarray*} \breve{P} = \sum \limits _{p = k}^{k+m-2} {\sum \limits _{q = p+1}^{k+m-1} {\mathbf {y}_p^q{{\left({\mathbf {y}_p^q} \right)^\top }}}}, \breve{Q} = \sum \limits _{p = k}^{k+m-2} {\sum \limits _{q = p+1}^{k+m-1} {\mathbf {y}_{p + 1}^{q+1}{{\left({\mathbf {y}_p^q} \right)^\top }}} }, \mathbf {y}^{q}_{p} = y(p) - y(q). \nonumber \nonumber \end{eqnarray*}\]
Recalling Equations (37) and (38), the learned model is obtained as
\begin{align} {\breve{A}_{{\text{learned}}}} &= \frac{1}{T}({{{\widehat{A}}_{{\text{learned}}}} - \mathbf {I}}) = \frac{1}{T}({\breve{Q}{\breve{P}^{ - 1}} - \mathbf {I}}), \end{align}
(39a)
\begin{align} {\breve{B}_{{\text{learned}}}} &= \frac{\sum \nolimits _{z = k}^{k+m-1} {({y(z + 1) - {{\widehat{A}}_{{\text{learned}}}}y(z)})}}{{mT\mathfrak {u}}} = \frac{\sum \nolimits _{z = k}^{k+m-1} {({y(z + 1) - \breve{Q}{\breve{P}^{ - 1}}y(z)})}}{{mT\mathfrak {u}}}. \end{align}
(39b)
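The learning step in Equations (38) and (39) amounts to a pairwise-differenced least squares: since the input is held constant, differences of consecutive samples cancel the input term, so \(\widehat{A}_{\text{learned}}\) can be recovered first and \(\widehat{B}_{\text{learned}}\mathfrak{u}\) from the one-step residuals. A minimal noise-free sketch, with samples re-indexed from zero and one concrete choice of the pair set (the paper's exact summation ranges may differ):

```python
import numpy as np

def learn_model(y, T, u_const):
    """Sketch of the finite-time learning step (38)-(39): recover
    (A_learned, B_learned) from m+1 state samples y[0..m] collected
    under the constant input u_const. Pairwise differences cancel the
    constant input, so A_hat = Q @ inv(P); B_hat*u comes from residuals."""
    m = len(y) - 1
    P = np.zeros((2, 2))
    Q = np.zeros((2, 2))
    for p in range(m):
        for q in range(p + 1, m):
            d = y[p] - y[q]                 # y_p^q
            d_next = y[p + 1] - y[q + 1]    # y_{p+1}^{q+1}
            P += np.outer(d, d)
            Q += np.outer(d_next, d)
    A_hat = Q @ np.linalg.inv(P)
    B_hat_u = np.mean([y[z + 1] - A_hat @ y[z] for z in range(m)], axis=0)
    A_learned = (A_hat - np.eye(2)) / T     # invert the Euler discretization (37)
    B_learned = B_hat_u / (T * u_const)
    return A_learned, B_learned
```

On noiseless data generated by \(x(q+1) = (\mathbf{I} + TA)x(q) + TBu\), this recovers \((A, B)\) up to numerical precision.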
The learned vehicle model in an unforeseen environment for \(\mathcal {L}_{1}\) adaptive controller is thus described as
\begin{align} \dot{\mathrm{x}}(t) &= {\breve{A}_{\text{learned}}}\mathrm{x}(t) + \breve{B}_{\text{learned}}\bar{\mathrm{u}}(t), \text{with}~\breve{A}_{\text{learned}} = \left[ \begin{array}{*{20}{c}} {a_{\text{learned}}^{11}}&{a_{\text{learned}}^{12}} \\ {a_{\text{learned}}^{21}}&{a_{\text{learned}}^{22}} \end{array} \right]. \end{align}
(40)
With the consideration of the relation (12) and the learned \(A_{\text{learned}}\) in Equation (40), the chosen velocity and slide references for safe velocity regulation are required to satisfy
\begin{align} {a_{\text{learned}}^{21}}\mathbf {w}^\mathrm{r}_{\text{learned}} + {a_{\text{learned}}^{22}}\mathbf {v}^\mathrm{r}_{\text{learned}} = 0,\left| {r\mathbf {w}^\mathrm{r}_{\text{learned}} - \mathbf {v}^\mathrm{r}_{\text{learned}}} \right| - \breve{\mathfrak {a}}_{\text{learned}} = 0. \end{align}
(41)
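The two conditions in Equation (41) pin the reference pair down to a one-parameter family and a slip-magnitude constraint, so a consistent pair can be computed in closed form. A sketch under the assumptions \(a^{22}_{\text{learned}} \ne 0\) and a positive angular-velocity reference (the paper does not prescribe this particular sign choice):

```python
import numpy as np

def slip_consistent_refs(a21, a22, r, slip_target):
    """Sketch of solving (41): find (w_ref, v_ref) with
    a21*w_ref + a22*v_ref = 0 and |r*w_ref - v_ref| = slip_target.
    Assumes a22 != 0 and r != -a21/a22; illustrative, not the paper's
    procedure for selecting among the sign branches."""
    ratio = -a21 / a22                     # from the first constraint: v_ref = ratio * w_ref
    w_ref = slip_target / abs(r - ratio)   # from the second constraint, w_ref > 0
    v_ref = ratio * w_ref
    assert np.isclose(a21 * w_ref + a22 * v_ref, 0.0)
    assert np.isclose(abs(r * w_ref - v_ref), slip_target)
    return w_ref, v_ref
```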
(41)

4.2.2 Sample Complexity.

Due to modeling uncertainty and sampling noise, one intuitive question pertaining to the accuracy of model learning arises: Given the sampling frequency, how many samples are sufficient for the learned model to achieve the prescribed levels of accuracy and confidence? To answer this question, we present the sample complexity analysis of the proposed model learning.
We let \(\mathfrak {s}_{i}\left(A_{\text{learned}}\right)\) denote the \(i\)th singular value of matrix \(A_{\text{learned}}\), based on which we assume the following bounds pertaining to \(\mathfrak {s}_{i}\left(A_{\text{learned}}\right)\) are known:
\[\begin{eqnarray*} \widehat{\underline{\mathfrak {s}}} &\le \mathop {\min }\left\lbrace {{\mathfrak {s}_1}\left(A_{\text{learned}} \right), {\mathfrak {s}_2}\left(A_{\text{learned}} \right)} \right\rbrace ,\nonumber \nonumber\\ \widetilde{\underline{\mathfrak {s}}} &\le \mathop {\min }\limits _{z \in \left\lbrace {k + 1, \ldots , k+m} \right\rbrace } \left\lbrace {{{\left| {\mathfrak {s}_1^{z - k}\left(A_{\text{learned}} \right) - 1} \right|}}}, {{{\left| {\mathfrak {s}_2^{z - k}\left(A_{\text{learned}} \right) - 1} \right|}}} \right\rbrace ,\nonumber \nonumber\\ \overline{\widetilde{\mathfrak {s}}} &\ge \left\Vert A_{\text{learned}} \right\Vert _{\mathrm{F}}, {\overline{\widehat{\mathfrak {s}}}}_{A_{\text{learned}}} \ge \mathop {\max }\limits _{z \in \left\lbrace {k + 1, \ldots ,p} \right\rbrace } \left\lbrace {{{\left\Vert {{A_{\text{learned}}^{k - 1}} - {A_{\text{learned}}^{z - 1}}} \right\Vert _{\mathrm{F}}}}} \right\rbrace .\nonumber \nonumber \end{eqnarray*}\]
With the practical knowledge at hand, the sample complexity analysis is formally presented in the following theorem.
Theorem 4.1.
[28] For any \(\varepsilon \in [0, 1)\), and any \(\rho , \delta \in (0, 1)\), and any \(\phi \gt 0\), we have \(\mathbf {P} (||\breve{A}_{\mathrm{learned}} - {A}_{\mathrm{learned}}|| \le \phi) \ge 1 - \delta\), as long as the following hold:
\begin{align} &\min \left\lbrace {\frac{{{(1 - \varepsilon)^2\rho ^2}}}{{\mathfrak {n}\mathfrak {p}^{2}|| {{{\mathcal {C}}_\mathrm{v}}} ||}},\frac{{(1 - 2\varepsilon)\rho }}{{\mathfrak {p}}}} \right\rbrace \ge \frac{{\gamma ^2}}{2}\ln \frac{{4 {({\frac{2}{\varepsilon } + 1})^2}}}{\delta }, \end{align}
(42)
\begin{align} &\phi \ge \sqrt {\frac{{8c{\kappa ^2}}}{{{\left({1 - \rho } \right)\mathfrak {f}_{2}l_{\mathrm{up}}}}}\ln {\frac{(2+\rho)^{2}}{\delta \rho ^{2} }}}, \end{align}
(43)
where
\begin{align} &{\mathfrak {n}} = \sum \limits _{z = k}^{k+m-1} {\left({({z + 1})(m+k-z) + \frac{(m+k-z)(m+k-z+1)}{2}}\right)},\\ &\eta = [\breve{\mathbf {x}}^{\top }(1),~~{{\widehat{{\bf w}}}^{\top }},~~\breve{\mathbf {f}}^\top ,~~ \widetilde{\mathbf {f}}^\top ]^{\top }, \mathcal {C}_{\mathrm{v}} = {{\bf E}}[\eta \eta ^{\top }], \mathfrak {p} = \frac{{{{\sqrt {\mathfrak {n}}\sqrt {2}}{\mathfrak {g}^2}}}}{{\sum \nolimits _{r = k}^{k+m - 1} {\sum \nolimits _{q = r+1}^{k+m} {{\mathfrak {f}_{\left({r,q} \right)}}} } }}, \nonumber \nonumber\\ &\mathfrak {f}_{(r,q)} = {\left\lbrace \begin{array}{ll}\mathfrak {f}_{1}, &\text{if}~q \gt 2r- 1\\ \mathfrak {f}_{2}, &\text{if}~q \le 2r- 1 \end{array}\right.}, \mathfrak {f}_{1} = \widehat{\underline{\mathfrak {s}}}^{2k - 2}\widetilde{\underline{\mathfrak {s}}}^{2}\sigma _{\mathrm{i}}^2 + \widehat{\underline{\mathfrak {s}}}^{2k-2}\mathfrak {v}_{\rm {p}}^2 + 2\mathfrak {v}_{\mathrm{o}}^2, \mathfrak {f}_{2} = \widehat{\underline{\mathfrak {s}}}^{2k - 2}\widetilde{\underline{\mathfrak {s}}}^{2} \mathfrak {v}_{\mathrm{i}}^2 \!+\! 2\mathfrak {v}_{\mathrm{o}}^2, \nonumber \nonumber\\ &\mathfrak {g} = 1 + \overline{\widehat{\mathfrak {s}}} + \mathop {\max }\limits _{q \in \left\lbrace {k, \ldots ,k+m - 1} \right\rbrace }\left\lbrace {\frac{{{\overline{\widetilde{\mathfrak {s}}}} - \overline{\widetilde{\mathfrak {s}}}^{q - 1}}}{{1 - {\overline{\widetilde{\mathfrak {s}}}}}}}\right\rbrace + \mathop {\max }\limits _{j \lt r \in \left\lbrace {k, \ldots ,k+m - 1} \right\rbrace }\left\lbrace {\frac{{\overline{\widetilde{\mathfrak {s}}}^{j - 1} -\overline{\widetilde{\mathfrak {s}}}^{r - 1}}}{{1 - {\overline{\widetilde{\mathfrak {s}}}}}}}\right\rbrace ,\nonumber \nonumber \end{align}
(44)
with \(\mathfrak {v}_{\mathrm{p}}^2\), \(\mathfrak {v}_{\mathrm{o}}^2\), and \(\mathfrak {v}_{\mathrm{i}}^2\) respectively denoting the variances of \(Tf_{1}(x,q)\), \(\mathbf {o}(q)\) and the initial condition of the dynamics (35), and
\[\begin{eqnarray*} &\breve{\mathbf {x}}(1) = \left[ {{x^\top }(1),~{x^\top }(1),~\ldots ,~{x^\top }(1)} \right]\!^{\top } \in {\mathbb {R}^{\sum \nolimits _{r = k}^{k+m - 1} \!\!{2(k+m-r)} }},\nonumber \nonumber\\ &{\widehat{\mathbf {w}}}_{k} = [ {{{(\mathbf {o}(k)-\mathbf {o}(k+1))^\top }}\!, \ldots , {{(\mathbf {o}(k)-\mathbf {o}(k+m))^\top }}} ], \nonumber \nonumber\\ &{{\widehat{{\bf w}}}} = [ {{\widehat{{\bf w}}}_k,~ {\widehat{{\bf w}}}_{k + 1},~ \ldots ,~ {\widehat{{\bf w}}}_{k+m-2},~ {\widehat{{\bf w}}}_{k+m-1}}]^{\top },\nonumber \nonumber\\ &\breve{\mathbf {f}}^{r}_{k} = [ {{{(Tf_{1}(x,k-1) - Tf_{r}(x,r-1))^\top }},~ \ldots ,~{{(Tf_{1}(x,r-k+1) - Tf_{r}(x,1))^\top }}}],\nonumber \nonumber \end{eqnarray*}\]
\[\begin{eqnarray*} &\breve{\mathbf {f}}_k = [ {{{{\breve{\mathbf {f}}_{k}^{k+1}}}},~{{{\breve{\mathbf {f}}_{k}^{k+2}}}},~\ldots ,~{{{\breve{\mathbf {f}}_k^{k+m-1}}}}}],\nonumber \nonumber\\ &\breve{\mathbf {f}} = [ {\breve{\mathbf {f}}_k,~\breve{\mathbf {f}}_{k + 1},~\ldots ,~\breve{\mathbf {f}}_{p - 2},~\breve{\mathbf {f}}_{p - 1}}]^{\top },\nonumber \nonumber\\ &\widetilde{\mathbf {f}}_k^r = [ {(Tf_{1}(x,r-k) + T B_{\text{learned}}\mathfrak {u})^\top , \!~\ldots , \!~(Tf_{1}(x,1) + T B_{\text{learned}}\mathfrak {u})^\top }],\nonumber \nonumber\\ &\widetilde{\mathbf {f}}_k = [ {\widetilde{\mathbf {f}}_k^{k+1},~\widetilde{\mathbf {f}}_k^{k+2},~\ldots ,~\widetilde{\mathbf {f}}_k^{k+m}} ],\nonumber \nonumber\\ &\widetilde{\mathbf {f}} = [ {\widetilde{\mathbf {f}}_k,~~\widetilde{\mathbf {f}}_{k + 1},~~ \ldots ,~~\widetilde{\mathbf {f}}_{p - 2},~~\widetilde{\mathbf {f}}_{k+m-1}}]^{\top }.\nonumber \nonumber \end{eqnarray*}\]
Remark 6.
Due to page limit, we refer readers to Reference [28] for the more detailed assumptions of Theorem 4.1. The parameter \(\gamma\) in Equation (42) comes from an assumption in Reference [28] that the distribution of entries of the vector \(T(f_{1}(x,k) - f_{1}(x,r)) + \mathbf {o}(k+1) - \mathbf {o}(r+1) - A(\mathbf {o}(k)-\mathbf {o}(r))\) is conditionally \(\gamma\)-sub-Gaussian.
Remark 7.
Our proposed finite-time model learning is mainly used in safety-critical and time-critical environments for fast online model updating, when the off-line stored models have a large mismatch with the real system due to unforeseen operating environments or black swan–type events. In these challenging environments, before the learned model becomes available, we do not update the (model-based) control command, since without a relatively accurate system model the computed control command cannot be regarded as reliable. However, before collecting the most recent data for learning, we must know in advance how many real-time samples from the current trajectory are sufficient for the learned model to satisfy the prescribed levels of accuracy and confidence. In view of Equation (43), the number of real-time samples for model learning, i.e., \(m = l_{\mathrm{up}}\), should ensure that Equations (42) and (43) hold, such that the prescribed accuracy \(\phi\) and confidence \(1 - \delta\) of the learned model can be guaranteed.

5 Sℒ1-Simplex Architecture Design

The off-line-built switching models and on-line learned models constitute a backbone of S\(\mathcal {L}_{1}\)-Simplex. Sections 3 and 4 have paved the way for the design of S\(\mathcal {L}_{1}\)-Simplex, which is carried out in this section. We first investigate safe switching control based on the off-line-built and on-line learned models, which provides the references of the vehicle's safe behavior for the \(\mathcal {L}_{1}\) adaptive controller to track.

5.1 Safe Switching Control

This section investigates safe switching control of off-line-built and on-line learned vehicle models. The safe velocity regulation control for the off-line-built model (32) and on-line learned model (40) is designed as
\begin{align} \bar{\mathrm{u}}(t) = - F_{\widetilde{\sigma }(t)}^w{\bar{\mathrm{e}}_w}(t) - F_{\widetilde{\sigma }(t)}^v{\bar{\mathrm{e}}_v}(t) + \mathfrak {b}_{\sigma (t)}, \end{align}
(45)
where \({\bar{\mathrm{e}}_w}(t) = \mathrm{w}(t) - \mathbf {w}_{\sigma (t)}^{\text{r}}\), \({\bar{\mathrm{e}}_v}(t) = \mathrm{v}(t) - \mathbf {v}_{\sigma (t)}^{\text{r}}\), \(\widetilde{\sigma }(t)\) is given in Equation (32), \(F_{{\widetilde{\sigma }}(t)}^w\) and \(F_{{\widetilde{\sigma }}(t)}^v\) are the designed control gains, and
\begin{align} \mathfrak {b}_{\sigma (t)} = \left\lbrace \begin{gathered}\frac{{\zeta \mathbf {v}_{\sigma (t)}^{\text{r}} + \varrho \mathbf {w}_{\sigma (t)}^{\text{r}}}}{J}, {{\sigma (t)}} \in {\mathbb {E}{\setminus } \mathbb {L}} \\ a_{\sigma (t)}^{11}\mathbf {w}_{\sigma (t)}^{\text{r}} + a_{\sigma (t)}^{12}\mathbf {v}_{\sigma (t)}^{\text{r}}, {\sigma (t)} \in \mathbb {L}. \\ \end{gathered} \right. \end{align}
(46)
Substituting Equations (15) and (16) under the constraints (12) and (41) into the models (32) and (40) with the control input (45) yields the tracking error dynamics,
\begin{align} \dot{\bar{\mathrm{e}}}(t) = ({{A_{\breve{\sigma }(t)}} + B_{\sigma \left(t \right)}F_{\breve{\sigma }(t)}})\bar{\mathrm{e}}(t), \bar{\mathrm{e}}({t_k}) = {E_k}\bar{\mathrm{e}}({{t^{-}_k}}), \bar{\mathrm{e}}({t_0^ + }) = \bar{\mathrm{e}}({{t_0}}), \end{align}
(47)
where \({E_k}\) is due to the impulse effect induced by velocity reference switching and
\begin{align} {B_{\sigma \left(t \right)}} = \left\lbrace \begin{array}{l} \left[ \begin{array}{*{20}{c}} 1&1 \\ 0&0 \end{array} \right] = \widehat{B}, {{\sigma (t)}} \in {\mathbb {E}{\setminus } \mathbb {L}}\\ \left[ \begin{array}{*{20}{c}} a&a \\ 0&0 \end{array} \right] = \breve{B}_{\sigma (t)},{\sigma (t)} \in \mathbb {L}. \end{array} \right. \end{align}
(48)
With the defined \(\widehat{B}\) and \(\breve{B}_{\text{learned}}\) in Equation (48), we present the LMI formula for computing \(F_{\breve{\sigma }}\) and \(P_{{\sigma }}\) that guarantee safe velocity regulation,
\begin{align} &{Q_\sigma } \gt 0, \forall \sigma \in \mathbb {E}, \end{align}
(49a)
\begin{align} &\widehat{c}^\top _\sigma {Q_\sigma }{\widehat{c}_\sigma } \le 1, \forall \sigma \in \mathbb {E}, \end{align}
(49b)
\begin{align} &{A_{\sigma _{\upsilon }}}{Q_\sigma } + \widehat{B}{\breve{E}_{\sigma _{\upsilon }}} + {(\! {{A_{\sigma _{\upsilon }}}{Q_\sigma } \!+\! \widehat{B}{\breve{E}_{\sigma _{\upsilon }}}})^\top } \lt 0, \forall \sigma \in {\mathbb {E}{\setminus } \mathbb {L}}, \upsilon \in \lbrace 1,2\rbrace , \end{align}
(49c)
\begin{align} &{A_{\sigma }}{Q_{\sigma }} + \breve{B}_{\sigma }{\breve{E}_{\sigma }} + {({{A_{\sigma }}{Q_{\sigma }} + \breve{B}_{\sigma }{\breve{E}_{\sigma }}})^\top } \lt 0, \forall \sigma \in \mathbb {L}, \end{align}
(49d)
based on which we obtain
\begin{align} F_{\sigma _{\upsilon }} = {\breve{E}_{\sigma _{\upsilon }}}\bar{P}_{\sigma }, \sigma \in {\mathbb {E}{\setminus }\mathbb {L}}, \upsilon \in \lbrace 1,2\rbrace ; F_{\sigma } = {\breve{E}_{\sigma }}\bar{P}_{\sigma }, \sigma \in \mathbb {L}; \bar{P}_{\sigma }^{ - 1} = Q_{\sigma },\sigma \in \mathbb {E}. \end{align}
(50)
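In practice, the LMIs (49) are solved with a semidefinite-programming solver; once candidate gains are available, the resulting closed-loop Lyapunov condition (54c) can be verified numerically. A sketch with placeholder plant, gain, and a Lyapunov equation solved by Kronecker vectorization (the values below are illustrative, not from the paper):

```python
import numpy as np

# Placeholder plant and candidate gain (in the paper, F and P_bar come
# from solving the LMIs (49) with an SDP solver).
A = np.array([[-2.0, 1.0], [0.5, -1.0]])
B = np.array([[1.0], [0.0]])
F = np.array([[-3.0, -1.0]])
A_cl = A + B @ F

# Solve A_cl^T P + P A_cl = -I via vectorization:
# vec(M X) = (I kron M) vec(X), vec(X N) = (N^T kron I) vec(X), column-major vec.
n = A.shape[0]
M = np.kron(np.eye(n), A_cl.T) + np.kron(A_cl.T, np.eye(n))
P = np.linalg.solve(M, -np.eye(n).ravel(order="F")).reshape(n, n, order="F")
P = 0.5 * (P + P.T)  # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                      # P_bar > 0
assert np.all(np.linalg.eigvalsh(P @ A_cl + A_cl.T @ P) < 0)  # condition (54c)
```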
One potential benefit of switching control is extending the safety envelope to \(\Theta\) \(=\) \(\bigcup \nolimits _{\sigma \in \mathbb {E}}{{\Theta _{\sigma }}}\) [37]. However, the switching time and the impulsive effect induced by velocity reference switching are the critical factors in the safety envelope extension. In other words, if the dwell times of subsystems do not take the impulsive effect into account, then the safety envelope cannot be extended, which is illustrated by Figure 4:
Fig. 4.
Fig. 4. Impulsive effect on safety envelope: (a) System states escape from safety envelope and (b) system states stay in safety envelope.
Figure 4(a): At switching time \(t_{k+1}\), the system state \(\bar{\mathrm{e}}(t_{k+1})\) does not fall into \(\Theta _{2}\). Consequently, \(\bar{\mathrm{e}}(t) \notin \Theta\) for some time, which hinders the safety envelope extension.
Figure 4(b): At switching time \(t_{k+1}\), the system state \(\bar{\mathrm{e}}(t_{k+1})\) falls into the safety envelope \(\Theta _{2}\), which leads to \(\Theta _{2} \subseteq \Theta\), thus extends the safety envelope.
The impulsive effect imposes a higher requirement on the dwell times of switching controllers for the safety envelope extension. To simplify the presentation, we define the following:
\begin{align} &\bar{A}_{\sigma _{\upsilon }} = {{A_{\sigma _{\upsilon }}} + B{F_{\sigma _{\upsilon }}}}, \bar{A}_{\sigma } = {{A_{\sigma }} + \breve{B}_{\sigma }{F_{\sigma }}}, \end{align}
(51)
\begin{align} &\lambda _{\max }^\sigma = {\left\lbrace \begin{array}{ll}\mathop {\max }\limits _{\upsilon \in \left\lbrace {1,2} \right\rbrace } \lbrace {\lambda _{\max }}({\bar{P}_\sigma }{{\bar{A}}_{\sigma _{\upsilon }}} + \bar{A}_{\sigma _{\upsilon }}^\top {\bar{P}_\sigma })\rbrace , &\text{if}~\sigma \in {\mathbb {E}{\setminus }\mathbb {L}}\\ {\lambda _{\max }}({\bar{P}_{\sigma } }{{\bar{A}}_{\sigma }} + \bar{A}_{\sigma }^ \top {\bar{P}_{\sigma } }), &\text{if}~\sigma \in \mathbb {L} \end{array}\right.}. \end{align}
(52)
With the definitions at hand, the following theorem formally presents safe switching control.
Theorem 5.1.
Consider the impulsive switched system (47) and safety envelopes (28), with \(F_{\sigma _{\upsilon }}\), \(F_{\sigma }\) and \(\bar{P}_{\sigma }^{ - 1}\) computed via Equations (49) and (50). If the minimum dwell time defined in Equation (1) satisfies
\begin{align} \mathrm{dwell}_{\min } \gt \mathop {\max }\limits _{k \in \mathbb {N}} \left\lbrace {\frac{{{\lambda _{\max }}({{\bar{P}_{\sigma ({t_{k - 1}})}}})}}{{\lambda _{\max }^{\sigma ({t_{k - 1}})}}}\ln \frac{{{\lambda _{\min }}({{\bar{P}_{\sigma ({t_{k - 1}})}}})}}{{{\lambda _{\max }}({E_k^\top {\bar{P}_{\sigma ({t_k})}}{E_k}})}}} \right\rbrace , \end{align}
(53)
then the system (47) is stable, and \(\bar{\mathrm{e}}(t) \in \Theta\) for any \(t \ge t_{0}\) if \(\bar{\mathrm{e}}(t_{0}) \in \Theta _{\sigma (t_{0})}\).
Proof.
In light of Equation (50), the formula (49) equivalently transforms to
\begin{align} &{\bar{P}_\sigma } \gt 0, \forall \sigma \in \mathbb {E}, \end{align}
(54a)
\begin{align} &\widehat{c}^\top _\sigma {\bar{P}^{-1}_\sigma }{\widehat{c}_\sigma } \le 1, \forall \sigma \in \mathbb {E}, \end{align}
(54b)
\begin{align} &{\bar{P}_\sigma }{\bar{A}_{\sigma _{\upsilon }}} + \bar{A}_{\sigma _{\upsilon }}^\top {\bar{P}_\sigma } \lt 0, \forall \sigma \in {\mathbb {E}{\setminus }\mathbb {L}}, \upsilon \in \lbrace 1,2\rbrace , \end{align}
(54c)
\begin{align} &{\bar{P}_{\sigma }}{\bar{A}_{\sigma }} + \bar{A}_{\sigma }^\top {\bar{P}_{\sigma }} \lt 0, \sigma \in \mathbb {L}, \end{align}
(54d)
where \(\bar{A}_{\sigma _{\upsilon }}\) and \({\bar{A}_{\sigma }}\) are given in Equation (51). We now construct a function:
\begin{align} {V_{\sigma ({{\bar{t}_k}})}}({\bar{\mathrm{e}}(t)}) = {\bar{\mathrm{e}}^\top }(t){\bar{P}_{\sigma ({{\bar{t}_k}})}}\bar{\mathrm{e}}(t), t \in [ {{\bar{t}_k},{\bar{t}_{k + 1}}}), \end{align}
(55)
where \(\bar{t}_{k}\) denotes a switching time. The time derivative of \({V_{\sigma ({{\bar{t}_k}})}}({\bar{\mathrm{e}}(t)})\) satisfies
\begin{align} {\dot{V}_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}(t)) &\le \lambda _{\max }^{{\sigma }({\bar{t}_k})}{\bar{\mathrm{e}}^\top }(t)\bar{\mathrm{e}}(t), \end{align}
(56)
\begin{align} &= \lambda _{\max }^{{\sigma } ({\bar{t}_k})}\lambda _{\max }^{ - 1}({{\bar{P}_{{\sigma }({\bar{t}_k})}}}){\lambda _{\max }}({{\bar{P}_{{\sigma } ({\bar{t}_k})}}}){\bar{\mathrm{e}}^\top }(t)\bar{\mathrm{e}}(t) \nonumber \nonumber\\ & \le \lambda _{\max }^{{\sigma }({\bar{t}_k})}\lambda _{\max }^{ - 1}({{\bar{P}_{{\sigma }({\bar{t}_k})}}}){V_{{\sigma }({\bar{t}_k})}}(\bar{\mathrm{e}}(t)) \lt 0, \end{align}
(57)
where Equation (56) is obtained by considering Equation (52), while Equation (57) follows from the preceding step by considering Equations (55), (54c), and (54d).
It follows from Equations (55), (47), and (57) that
\begin{align} {V_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}({t_k})) &= {\bar{\mathrm{e}}^{\top }}(\bar{t}_k^ -)({E^{\top }_k{\bar{P}_{{\sigma } ({\bar{t}_k})}}{E_k}})\bar{\mathrm{e}}(\bar{t}_k^ -)\nonumber \nonumber\\ & \le {\lambda _{\max }}({E_k^{\top }{\bar{P}_{{\sigma } ({\bar{t}_k})}}{E_k}}){\bar{\mathrm{e}}^{\top }}(\bar{t}_k^ -)\bar{\mathrm{e}}(\bar{t}_k^ -)\nonumber \nonumber\\ & \le \frac{{{\lambda _{\max }}({E_k^{\top }{\bar{P}_{{\sigma } ({\bar{t}_k})}}{E_k}})}}{{{\lambda _{\min }}({{\bar{P}_{{\sigma } ({\bar{t}_{k - 1}})}}})}}{V_{{\sigma } ({\bar{t}_{k - 1}})}}(\bar{\mathrm{e}}(\bar{t}_k^ -))\nonumber \nonumber\\ & \le \frac{{{\lambda _{\max }}({E_k^{\top }{\bar{P}_{{\sigma } ({\bar{t}_k})}}{E_k}})}}{{{\lambda _{\min }}({{\bar{P}_{{\sigma } ({\bar{t}_{k - 1}})}}})}}{e^{\frac{{\lambda _{\max }^{{\sigma } ({\bar{t}_{k - 1}})}({{\bar{t}_k} - {\bar{t}_{k - 1}}})}}{{{\lambda _{\max }}({{\bar{P}_{{\sigma } ({\bar{t}_{k - 1}})}}})}}}}{V_{{\sigma }({\bar{t}_{k - 1}})}}(\bar{\mathrm{e}}({\bar{t}_{k - 1}})), \end{align}
(58)
\begin{align} & \le \breve{\nu }_{k}{V_{{\sigma }({\bar{t}_{k - 1}})}}\left(\bar{\mathrm{e}}({\bar{t}_{k - 1}})\right), \end{align}
(59)
where \({\breve{\nu }_k} = \frac{{{\lambda _{\max }}({E_k^\top {\bar{P}_{\sigma ({\bar{t}_k})}}{E_k}})}}{{{\lambda _{\min }}({{\bar{P}_{\sigma ({\bar{t}_{k - 1}})}}})}}{e^{\frac{{\lambda _{\max }^{\sigma ({\bar{t}_{k - 1}})}\mathrm{dwell}_{\min }}}{{{\lambda _{\max }}({{\bar{P}_{\sigma ({\bar{t}_{k - 1}})}}})}}}}\). We note that the inequality (58) is obtained by integrating Equation (57), while Equation (59) follows from Equation (58) by noting that \(\lambda _{\max }^{^{\sigma ({{\bar{t}}_{k - 1}})}} \lt 0\), which is implied by Equations (54c) and (54d). The condition (53) implies \(0 \lt {\breve{\nu }_k} \lt 1\) for all \(k \in \mathbb {N}\); letting \(\breve{\nu } = \mathop {\sup }\limits _{k \in \mathbb {N}} \left\lbrace {{\breve{\nu }_k}} \right\rbrace \lt 1\), we thus have
\begin{align} {V_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}({\bar{t}_k})) \lt \breve{\nu } {V_{{\sigma } ({\bar{t}_{k - 1}})}}(\bar{\mathrm{e}}({\bar{t}_{k - 1}})), \end{align}
(60)
by which we construct a strictly decreasing sequence with respect to \(k\): \(\lbrace {{V_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}({t_k})),k \in \mathbb {N}} \rbrace\). The decreasing sequence straightforwardly implies that the switched system (47) is asymptotically stable.
We note that Equation (57) implies that \({V_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}(t)) \lt {V_{{\sigma } ({\bar{t}_k})}}(\bar{\mathrm{e}}({\bar{t}_k}))\) for any \(t \gt \bar{t}_k\) in \([ {{\bar{t}_k},{\bar{t}_{k + 1}}})\), which, in conjunction with Equation (60), implies that \({{V}_{\sigma ({{\bar{t}_k}})}}(\bar{\mathrm{e}}(t)) \lt {{V}_{\sigma ({{\bar{t}_0}})}}(\bar{\mathrm{e}}(t_{0}))\) for any \(t \gt t_{0}\) and \(\forall k \in \mathbb {N}\). Therefore, if \(\bar{\mathrm{e}}(t_{0}) \in \Theta _{\sigma (t_{0})}\), then we have \(\bar{\mathrm{e}}(t) \in \Theta _{\sigma (t)}\) for any \(t \ge t_{0}\). As a consequence, \(\bar{\mathrm{e}}(t) \in \Theta = \bigcup \nolimits _{\sigma \in \mathbb {E}} {{\Theta _{\sigma }}}\) for any \(t \ge t_{0}\).□
Remark 8.
It is known that unreasonable switching between (even) stable models in hybrid systems may lead to instability [26]. One of the contributions of Theorem 5.1 is to guarantee that under the proposed switching rules, S\(\mathcal {L}_{1}\)-Simplex will be stable.
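For one switch, the dwell-time bound (53) is a simple function of the eigenvalues of the Lyapunov matrices and the impulse matrix, and can be evaluated numerically. A sketch in which all inputs are placeholders (\(P\) matrices, impulse matrix \(E\), and a \(\lambda^{\sigma}_{\max} \lt 0\) as defined in Equation (52)):

```python
import numpy as np

def min_dwell_time(P_prev, P_next, E, lam_sigma_max):
    """One term of the dwell-time bound (53): P_prev/P_next are the
    Lyapunov matrices of the outgoing/incoming modes, E the impulse
    matrix, and lam_sigma_max < 0 as in (52). Inputs are placeholders."""
    num = np.max(np.linalg.eigvalsh(P_prev))
    ratio = np.min(np.linalg.eigvalsh(P_prev)) / \
            np.max(np.linalg.eigvalsh(E.T @ P_next @ E))
    return (num / lam_sigma_max) * np.log(ratio)

# A jump that doubles the error (E = 2I) with identical P matrices
# requires dwelling at least ln(4)/|lam_sigma_max| in the current mode.
tau = min_dwell_time(np.eye(2), np.eye(2), 2.0 * np.eye(2), -1.0)
assert np.isclose(tau, np.log(4.0))
```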

5.2 Assumption

We now present a general system that can describe both the vehicle dynamics in the normal environments (33) and the vehicle dynamics in the unforeseen environments (34),
\begin{align} \dot{x}(t) &= {A_{\widetilde{\sigma }(t)}}x(t) + B_{\sigma (t)}u(t) + g(t), \end{align}
(61)
where \(\widetilde{\sigma }(t)\) is given in Equation (32); \(B_{{\sigma }(t)}\) is given in Equation (48); \(g(t) = f_{0}(x, t)\) if \({\sigma }(t) \in {\mathbb {E}{\setminus }\mathbb {L}}\), and \(g(t) = f_{1}(x, t)\) otherwise.
Considering Equation (61), the dynamics of the faulty vehicle system can be described by
\begin{align} \dot{x}(t) &= {A_{\widetilde{\sigma }(t)}}x(t) + B_{\sigma (t)}u(t) + f_{2}(x, t), \end{align}
(62)
where \(f_{2}(x, t)\) is an uncertainty function that represents modeling errors, noise, disturbance, unmodeled forces/torques, and so on. The fault dynamics (62) indicates that this article focuses on the class of software and physical failures, whose influences can be modeled by \(f_{2}(x,t)\).
Building on the safe switching control studied in the previous subsection, the remaining S\(\mathcal {L}_{1}\)-Simplex design relies on the following assumption on the uncertainty.
Assumption 3.
The uncertainties \(f_{q}(x,t)\) in Equations (33), (34), and (62) are uniformly bounded in time and Lipschitz in \(x\) over the safety set, i.e., there exist positive \(l_{q}\) and \(b_{q}\) such that
\begin{align} \left\Vert {f_{q}({0,t})} \right\Vert \le b_{q}~\text{and}~\left\Vert {f_{q}({{x_1},t}) - f_{q}({{x_2},t})} \right\Vert \le l_{q}\left\Vert {{x_1} - {x_2}} \right\Vert ,~~q = 0,1,2 \end{align}
(63)
hold for any \(t \ge 0\), and \(x_{1} - [\mathbf {w}^\mathrm{r}_{\sigma }, \mathbf {v}^\mathrm{r}_{\sigma }]^{\top }\), \(x_{2} - [\mathbf {w}^\mathrm{r}_{\sigma }, \mathbf {v}^\mathrm{r}_{\sigma }]^{\top }\) \(\in \bigcup \nolimits _{\sigma \in \mathbb {E}} {{\Omega _{\sigma }}}\), with \(\Omega _{\sigma }\) given in Equation (24).
We next present other backbones of S\(\mathcal {L}_{1}\)-Simplex in achieving Safe Objective 1 and Safe Objective 2 simultaneously.

5.3 Uncertainty Monitor

As shown in Figure 1, the decision logic needs the measurement of uncertainty from the monitor to make the decision of switching between HPC and M\(\mathcal {L}_{1}\)HAC. The dynamics of uncertainty monitor of the real car under the control actuator from HPC is described by
\begin{align} &\dot{z}(t) = {A_z}z(t) + ({A_z}{\widetilde{B}_z} - {\widetilde{B}_z}{A_{\mathrm{hpc}}})x(t) - {\widetilde{B}_z}Bu(t), \end{align}
(64a)
\begin{align} &{\widehat{f}}(x, t) = {C_z}z(t) + {C_z}{\widetilde{B}_z}x(t), \end{align}
(64b)
\begin{align} &z({{t_k}}) = - {\widetilde{B}_z}x({{t_k}}), \end{align}
(64c)
where \({\widehat{f}}(x, t) \in \mathbb {R}^{2}\) is a measurement of the uncertainty, and the triple \((A_{z}, \widetilde{B}_{z}, C_{z})\) constitutes a low-pass filter [37].
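The structure of Equations (64a)-(64c) is that of a filtered disturbance observer: the auxiliary state \(s = z + \widetilde{B}_z x\) obeys \(\dot{s} = A_z s + \widetilde{B}_z f\), so \(\widehat{f} = C_z s\) is a low-pass-filtered estimate of the uncertainty. A scalar Euler-integration sketch, with all numbers (\(a\), \(\omega\), \(f\), \(u\), step size) illustrative:

```python
# Scalar sketch of the uncertainty monitor (64): plant x_dot = a*x + u + f
# with constant unknown f; the monitor output f_hat converges to f.
a, omega, f_true, u, dt = -1.0, 50.0, 0.7, 0.2, 1e-4
Az, Bz, Cz = -omega, 1.0, omega   # first-order low-pass filter with unit DC gain

x, z = 0.0, 0.0                   # (64c): z(t_k) = -Bz * x(t_k) = 0 here
for _ in range(20000):            # 2 seconds of Euler integration
    dx = a * x + u + f_true                          # plant with uncertainty
    dz = Az * z + (Az * Bz - Bz * a) * x - Bz * u    # monitor dynamics (64a)
    x, z = x + dt * dx, z + dt * dz

f_hat = Cz * (z + Bz * x)         # monitor output (64b)
assert abs(f_hat - f_true) < 1e-3
```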

5.4 Switching Rules

Building on the safety envelope (28) and the uncertainty monitor (64), the switching rules, including the decision logic for HPC and M\(\mathcal {L}_{1}\)HAC and the switching logic for off-line-built and learned models, are described below.
Decision Logic: switching from HPC to M\(\mathcal {L}_{1}\)HAC
Rule I: triggered by the magnitude of uncertainty measurement:
\begin{align} &\left\Vert {{\widehat{f}}(x,t)} \right\Vert \gt \int _{0}^t {\left\Vert {{C_z}{e^{{A_z}({t - \tau })}}{B_z}} \right\Vert \left({l_{0}\left\Vert {x(\tau)} \right\Vert + {b_{0}}}\right)d\tau }. \end{align}
(65)
Rule II: triggered by the safety envelope (28):
\begin{align} {e^\top }(t){P_\sigma }e(t) = \theta ~\text{and}~ {e^\top }(t){P_\sigma }\dot{e}(t) \gt 0, \forall \sigma \in \mathbb {E}. \end{align}
(66)
Switching Logic: switching from off-line-built models to on-line learned model:
Rule III: triggered by the uncertainty measurement (65) and environmental perception: \({\sigma }(t) \notin {\mathbb {E}{\setminus }\mathbb {L}}\).
Rule IV: triggered by the safety envelope verification (66) and environmental perception: \({\sigma }(t) \notin {\mathbb {E}{\setminus }\mathbb {L}}\).
Remark 9.
It has been proved in Reference [37] that under Assumption 3, i.e., the normal condition, the triggering condition in \({\bf Rule}\) \({\bf I}\) does not hold, which means that M\(\mathcal {L}_{1}\)HAC is not activated.

5.5 ℒ1 Adaptive Controller

The components of \(\mathcal {L}_{1}\) adaptive controller in Figure 2 are described below.

5.5.1 State Predictor.

The state predictor of M\(\mathcal {L}_{1}\)HAC in Figure 2 is described by
\begin{align} \dot{\tilde{x}}(t) = {A_{\widetilde{\sigma }(t)}}x(t) + B_{\sigma (t)}u(t) + {\tilde{\mathrm{f}}}(t) - \alpha ({\tilde{x}(t) - x(t)}), \tilde{x}(t_{k^{*}}) = x(t_{k^{*}}), \end{align}
(67)
where \(t_{k^{*}}\) is the switching moment from HPC to M\(\mathcal {L}_{1}\)HAC, \(\alpha\) is an arbitrary positive scalar, and \({\tilde{\mathrm{f}}}(t)\) is the estimation of the uncertainties \({f_0}({x,t})\), \({f_1}({x,t})\), and \({f_2}({x,t})\), which is computed by the following adaptation law.

5.5.2 Adaptation Law.

The estimated \({\tilde{\mathrm{f}}}(t)\) in Equation (67) is computed via
\begin{align} {\dot{\tilde{\mathrm{f}}}}(t) = K{\text{Proj}_{{\Psi }}}({{{\tilde{\mathrm{f}}}}(t), -({\tilde{x}(t) - x(t)})}), \end{align}
(68)
where \(K\) is the adaptive gain, and
\begin{align} \!{\Psi } = \left\lbrace {f \in {\mathbb {R}^2}\left| {\left\Vert f \right\Vert \le \rho = \frac{l}{{\sqrt {\mathop {\min }\limits _{\sigma \in \mathbb {E}} \left\lbrace {{\lambda _{\min }}({P_\sigma })} \right\rbrace } }} + b} \right.} \right\rbrace , \end{align}
(69)
with
\begin{align} l = \max \left\lbrace {{l_0},{l_1},{l_2}} \right\rbrace ,b = \max \left\lbrace {{b_0},{b_1},{b_2}} \right\rbrace . \end{align}
(70)
The projection operator \({\text{Proj}_{{\Psi }}}: \mathbb {R}^{2} \times \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) in Equation (68) is defined as
\begin{align} {\text{Proj}_{{\Psi }}}({p,q}) = \left\lbrace \begin{array}{l} \!\!q - \frac{{\nabla g(p)(\nabla {g}(p))^\top q g(p)}}{{\left\Vert {\nabla g(p)} \right\Vert ^{2}}},~\text{if}~g(p) \gt 0~\text{and}~ q^{\top }\nabla g(p) \gt 0\\ \!\!q,~\text{otherwise} \end{array} \right., \end{align}
(71)
where \(g(p) = \frac{{{p^\top }p - {\rho ^2} + 1 - \vartheta }}{{1 - \vartheta }}\) with \(\vartheta \in ({1 - {\rho ^2},1})\). It has been proved in Reference [37] that the operator (71) can always guarantee \(\tilde{\mathrm{f}}(t) \in {\Psi }\).
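The projection operator (71) can be implemented directly from its definition: inside the ball it is the identity, and near the boundary it removes the outward component of \(q\) so the estimate stays in \(\Psi\). A sketch with illustrative values \(\rho = 1\) and \(\vartheta = 0.5 \in (1-\rho^2, 1)\):

```python
import numpy as np

def proj_psi(p, q, rho, vartheta=0.5):
    """Sketch of the projection operator (71) with
    g(p) = (p^T p - rho^2 + 1 - vartheta)/(1 - vartheta);
    requires vartheta in (1 - rho^2, 1). Values are illustrative."""
    g = (p @ p - rho**2 + 1 - vartheta) / (1 - vartheta)
    grad = 2.0 * p / (1 - vartheta)          # gradient of g at p
    if g > 0 and q @ grad > 0:
        # subtract the (scaled) outward component of q
        return q - grad * (grad @ q) * g / (grad @ grad)
    return q

p_bnd = np.array([1.0, 0.0])  # on the boundary ||p|| = rho, where g(p) = 1
assert np.allclose(proj_psi(p_bnd, np.array([1.0, 0.0]), rho=1.0), 0.0)        # outward part removed
assert np.allclose(proj_psi(p_bnd, np.array([0.0, 1.0]), rho=1.0), [0.0, 1.0]) # tangential part kept
assert np.allclose(proj_psi(np.array([0.1, 0.0]), np.array([1.0, 0.0]), rho=1.0), [1.0, 0.0])
```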

5.5.3 Low-Pass Filter.

The low-pass filter that takes \({\tilde{\mathrm{f}}}(t)\) as control input is described by
\begin{align} \dot{\breve{x}}(t) = \breve{A}\breve{x}(t) + \breve{B}{{\tilde{\mathrm{f}}}}(t), {u^{\text{ad}}}(t) = \breve{C}\breve{x}(t),~~~~~\breve{x}(t^{*}_{k}) = \mathbf {0}, \end{align}
(72)
where \(t^{*}_{k}\) denotes the switching time when M\(\mathcal {L}_{1}\)HAC is activated, and the triple \((\breve{A}, \breve{B}, \breve{C})\) is a state-space realization of a \(1 \times 2\) matrix of stable and strictly proper low-pass filters with the transfer function:
\begin{align} {T}(s) = \breve{C}{({s\mathbf {I} - \breve{A}})^{ - 1}}\breve{B}. \end{align}
(73)
Finally, as shown in Figure 2, the control input from M\(\mathcal {L}_{1}\)HAC for the real car in dynamic and/or unforeseen environments (Equation (34)) is
\begin{align} u(t) = \bar{\mathrm{u}}(t)- {u^{\text{ad}}}(t), \end{align}
(74)
where \({u^{\text{ad}}}(t)\) and \(\bar{\mathrm{u}}(t)\) are computed by Equations (72) and (45), respectively.
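A minimal discrete-time sketch of the filter (72) and its steady-state behavior, assuming a forward-Euler discretization and illustrative filter matrices (not the realization used in the experiments):

```python
import numpy as np

def lowpass_step(x_f, f_tilde, A_f, B_f, C_f, dt):
    """One Euler step of the low-pass filter (72); returns the next filter
    state and the adaptive control component u_ad used in Equation (74)."""
    x_next = x_f + dt * (A_f @ x_f + B_f @ f_tilde)
    return x_next, C_f @ x_next

# illustrative stable filter: T(s) = 1/(s + 1) on each channel
A_f = -np.eye(2); B_f = np.eye(2); C_f = np.eye(2)
x_f = np.zeros(2)                    # filter state reset at the switching time t_k*
f_tilde = np.array([0.3, -0.2])      # constant uncertainty estimate, for illustration
for _ in range(5000):
    x_f, u_ad = lowpass_step(x_f, f_tilde, A_f, B_f, C_f, dt=0.01)
# at low frequency the filter passes the estimate through, so u_ad approaches f_tilde
```

The total control input is then \(u = \bar{\mathrm{u}} - u^{\text{ad}}\) per Equation (74), with \(\bar{\mathrm{u}}\) from Equation (45).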

5.6 Mℒ1HAC Performance

Finally, we present the performance analysis of M\(\mathcal {L}_{1}\)HAC. Before proceeding, we define
\begin{align} &{H_{\widetilde{\sigma }(t_{k})}}(s) = {\left({s\mathbf {I} - {A_{\widetilde{\sigma }(t_{k})}} - B_{\sigma }{F_{\widetilde{\sigma }(t_{k})}}}\right)^{ - 1}}, \end{align}
(75)
\begin{align} &\delta = {\frac{{{\bar{\mathrm{e}}^ \top }({t_{{k^*}}}){P_{\sigma ({t_{{k^*}}})}}\bar{\mathrm{e}}({t_{{k^*}}})}}{{{\lambda _{\min }}({P_{\sigma ({t_k})}})}}}, \end{align}
(76)
\begin{align} &{\varepsilon } = \frac{1}{{(1 - {\chi _{\widetilde{\sigma } ({t_k})}})}}\left({{{\left\Vert {{H_{\widetilde{\sigma } ({t_k})}}(s)BT(s)(s + \alpha)} \right\Vert }_{{\mathcal {L}_1}}}\sqrt {\frac{\mu }{K}} } \right. \left. { + ({\delta + \left\Vert {x_{\sigma ({t_k})}^*} \right\Vert + \frac{b}{l}}){\chi _{\widetilde{\sigma }({t_k})}}} \right), \end{align}
(77)
\begin{align} &{\chi _{\widetilde{\sigma }(t_{k})}} = {\left\Vert {{H_{\widetilde{\sigma }(t_{k})}}(s)({I - BT(s)})} \right\Vert _{{\mathcal {L}_1}}}{l}, \end{align}
(78)
\begin{align} &{x_{\sigma (t)}^*} = [\mathbf {w}^\mathrm{r}_{\sigma (t)},~\mathbf {v}^\mathrm{r}_{\sigma (t)}]^\top . \end{align}
(79)
With these definitions at hand, the performance of M\(\mathcal {L}_{1}\)HAC is formally presented in the following theorem.
Theorem 5.2.
Consider the real vehicle dynamics (33) with control input (74) from M\(\mathcal {L}_{1}\)HAC after \(t = t_{k^{*}}\). If the minimum dwell time satisfies Equation (53), \(e(t_{k^{*}}) \in \Theta _{\sigma (t_{k^{*}})}\), \(\varepsilon \gt 0\) and \({\varepsilon } + \delta \le \frac{1}{{\sqrt {{\lambda _{\max }}({{P_{\sigma ({{t_k}})}}})} }}\), then \(e(t) \in \Phi _{\sigma ({{t_k}})}\) and \(\left\Vert {x(t) - \bar{\mathrm{x}}(t)} \right\Vert \le \varepsilon\), for any \(t \in [t_{k},t_{k+1})\), \(k \ge k^* \in \mathbb {N}\).
Proof.
The proof follows the same path as that of Theorem 4.10 in Reference [37]; we thus present only the critical differences.
It follows from Equations (61), (32), and (40), with the consideration of Equations (45) and (74), that
\[\begin{eqnarray*} {\dot{x}(t) - \dot{\bar{\mathrm{x}}}(t)} = \bar{A}_{\widetilde{\sigma }} ({{x}(t) - {\bar{\mathrm{x}}}(t)}) - B_{{\sigma }}{u^{\text{ad}}}(t) + g(t), \nonumber \nonumber \end{eqnarray*}\]
where \(\bar{A}_{\widetilde{\sigma }}\) is given in Equation (51). We then have
\begin{align} {\left\Vert {x(s) - \bar{\mathrm{x}}(s)} \right\Vert _{{\mathcal {L}_\infty }\left[ {{t_k},{t_{k + 1}}} \right)}} \le {\left\Vert {{H_{\widetilde{\sigma }(t_k)}}(s)BT(s)(s + \alpha)} \right\Vert _{{\mathcal {L}_1}}}\sqrt {\frac{\mu }{K}} + {{\rm M}_{\widetilde{\sigma }({{t_k}})}}, \end{align}
(80)
where \(H_{\widetilde{\sigma }(t_k)}\) is given in Equation (75), \(T(s)\) is given in Equation (73), and
\begin{align} \mu &= 4{\rho ^2} + \frac{{4\alpha {\rho ^2} + 2\rho l}}{\alpha }\left({\frac{1}{{1 - {e^{ - 2\alpha \,\mathrm{dwell}_{\min }}}}} + 1}\right), \end{align}
(81)
\begin{align} {{\rm {M}}_{\widetilde{\sigma }(t_k)}} &= {\left\Vert {{H_{\widetilde{\sigma }(t_k)}}(s)({\mathbf {I} - BT(s)})} \right\Vert _{{\mathcal {L}_1}}}{\left\Vert \breve{f}(s) \right\Vert _{{\mathcal {L}_\infty }\left[ {{t_k},{t_{k + 1}}} \right)}}. \end{align}
(82)
Following Equations (57) and (60), we have
\begin{align} {\bar{\mathrm{e}}^\top }(t){P_{\sigma ({{\bar{t}_k}})}}\bar{\mathrm{e}}(t) = {V_{\sigma ({{\bar{t}_k}})}}({\bar{\mathrm{e}}(t)}) \lt {V_{\sigma ({{t_{{k^*}}}})}}({\bar{\mathrm{e}}({{t_{{k^*}}}})}) = \bar{\mathrm{e}}^\top ({{t_{{k^*}}}}){P_{\sigma ({{t_{{k^*}}}})}}\bar{\mathrm{e}}({{t_{{k^*}}}}), \end{align}
(83)
for any \(t \in [\bar{t}_{k}, \bar{t}_{k+1})\) with \(\bar{t}_{k} \ge t_{k^{*}}\), \(\forall k \in \mathbb {N}\). With the consideration of \(\delta\) given by Equation (76), the inequality (83) implies that
\begin{align} {\Vert \bar{\mathrm{e}} \Vert _{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} \lt \delta ,~~~k \ge k^{*} \in \mathbb {N}. \end{align}
(84)
It follows from Equation (63) that
\begin{align} {\Vert \breve{f}_{q} \Vert _{{\mathcal {L}_\infty }\left[ {{t_k},{t_{k + 1}}} \right)}} \le {l}{\left\Vert {x} \right\Vert _{{\mathcal {L}_\infty }\left[ {{t_k},{t_{k + 1}}} \right)}} + {b},~q = 0, 1, 2, \end{align}
(85)
where \(l\) and \(b\) are given in Equation (70). Combining Equation (82) with Equations (84) and (85) yields
\begin{align} {{\rm {M}}_{\widetilde{\sigma }(t_k)}} &\le {\left\Vert {{H_{\widetilde{\sigma }({t_k})}}(s)(\mathbf {I} - BT(s))} \right\Vert _{{\mathcal {L}_1}}}\left({l{{\left\Vert {x} \right\Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + b} \right) \nonumber \nonumber\\ & = {\chi _{\widetilde{\sigma }({t_k})}}\left({{{\left\Vert {x} \right\Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + \frac{b}{l}} \right) \nonumber \nonumber\\ &\le {\chi _{\widetilde{\sigma }({t_k})}}\left({{{\left\Vert {x - \bar{\mathrm{x}}} \right\Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + {{\left\Vert { \bar{\mathrm{x}}\left(t \right)} \right\Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + \frac{b}{l}} \right) \nonumber \nonumber\\ & \le {\chi _{\widetilde{\sigma }({t_k})}} \left({{{\Vert {x - \bar{\mathrm{x}}} \Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + {{\Vert \bar{\mathrm{e}} \Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + \Vert {x_{\widetilde{\sigma }({t_k})}^*} \Vert + \frac{b}{l}}\right) \nonumber \nonumber\\ & \lt {\chi _{\widetilde{\sigma }({t_k})}} \left({{{\left\Vert {x - \bar{\mathrm{x}}} \right\Vert }_{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} + \delta + \Vert {x_{\sigma ({t_k})}^*} \Vert + \frac{b}{l}}\right), \end{align}
(86)
where \({\chi _{\widetilde{\sigma }({t_k})}}\) is given by Equation (78). Substituting Equation (86) into Equation (80) yields
\[\begin{eqnarray*} \left({1 - {\chi _{\widetilde{\sigma }({t_k})}}} \right){\left\Vert {x - \bar{\mathrm{x}}} \right\Vert _{{\mathcal {L}_\infty }[{t_k},{t_{k + 1}})}} \lt {\left\Vert {{H_{\widetilde{\sigma }({t_k})}}\left(s \right)BT\left(s \right)\left({s + \alpha } \right)} \right\Vert _{{\mathcal {L}_1}}}\sqrt {\frac{\mu }{K}} + {\chi _{\widetilde{\sigma } ({t_k})}}\left({\delta + \Vert {x_{\widetilde{\sigma }({t_k})}^*} \Vert + \frac{b}{l}} \right),\nonumber \nonumber \end{eqnarray*}\]
which, in conjunction with \(\varepsilon \gt 0\) (given in Equation (77)), results in \(\left\Vert {x\left(t \right) - \bar{\mathrm{x}}\left(t \right)} \right\Vert \le \varepsilon\).□

6 Experiments

This section demonstrates M\(\mathcal {L}_{1}\)HAC for safe velocity regulation. The experiments are performed on the AutoRally platform [16], a high-performance testbed for self-driving vehicle research. The open-source code of the revised AutoRally platform for safe velocity regulation in dynamic and unforeseen environments is available in Reference [1].

6.1 AutoRally Knowledge

6.1.1 Actuators.

The throttle, steering, and brakes are the control variables of AutoRally. Valid steering command values lie in \([-1,1]\): a value of \(-1\) turns the steering all the way left, \(1\) turns it all the way right, and \(0\) makes a calibrated AutoRally platform drive in a straight line.
Valid throttle command values also lie in \([-1,1]\): a value of \(-1\) is full (rear) brake and \(1\) is full throttle. The front brake value ranges from \(0\) (no brake) to \(1\) (full front brake); negative values are undefined.
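These ranges can be enforced with a small guard before commands are sent to the platform (a hypothetical helper for illustration, not part of the AutoRally API):

```python
def clamp_commands(steering, throttle, front_brake):
    """Clamp actuator commands to AutoRally's valid ranges:
    steering and throttle in [-1, 1]; front brake in [0, 1] (negatives undefined)."""
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    return (clip(steering, -1.0, 1.0),
            clip(throttle, -1.0, 1.0),
            clip(front_brake, 0.0, 1.0))
```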

6.1.2 Vehicle Model Parameters.

All simulation vehicle parameters (including total mass, front wheel mass, rear wheel mass, overall length, overall width, overall height, wheelbase, rear axle to CG (x offset), rear axle to CG (z offset), front track, rear track and wheel diameter, and sensor placement and characteristics) are set according to their experimentally determined values from a physical 1:5 scale (HPI Baja 5SC) RC trophy truck [16].
The vehicle’s parameters, including wheel rotational inertia, friction torque on wheel, aerodynamic drag constant, viscous friction in driven wheel, gravity center height, brake piston effective area, pad friction coefficient and brake disc effective radii, are unknown [16].

6.1.3 Vehicle Setting.

In the experiments, the vehicle's steering command is fixed at 0, i.e., the vehicle drives straight ahead. The front brake is not used. The sensor sampling frequencies of the angular and longitudinal velocities are set to 100 Hz. The driving areas are flat.

6.2 ℒ1 Adaptive Controller vs. Normal Controller

Due to the unknown parameters of AutoRally listed in Subsection 6.1.2, the off-line built models described by Equations (29) and (30) are not available. Alternatively, we use the learned model to demonstrate the advantage of the \(\mathcal {L}_{1}\) adaptive controller. As shown in Figure 5, the vehicle is driving in a flat grass area. With the computed variances, according to Theorem 4.1, the sensor data over a 3-second time interval guarantee the prescribed levels of accuracy \(\phi = 0.8\) and confidence \(1 - \delta = 0.8\) of the learned model: \({A_{\text{learned}}} = \left[ {\begin{matrix} 0.2753 & -0.4740 \\ 1.1742 & -1.3313 \end{matrix}} \right]\) and \({B_{\text{learned}}} = \left[ {\begin{matrix} 0.7 & 0.7 \\ 0 & 0 \end{matrix}} \right]\).
Fig. 5. Driving environment.
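The finite-time model learning itself follows Reference [28]; as a rough stand-in, an ordinary least-squares fit of a linear model from one sampled trajectory can be sketched as follows (this is not the accuracy/confidence-certified scheme of Theorem 4.1):

```python
import numpy as np

def identify_linear_model(X, U, dt):
    """Least-squares fit of x_dot = A x + B u from a single trajectory.

    X: (N+1, n) array of sampled states; U: (N, m) array of inputs;
    dt: sample period. Returns estimates of A (n x n) and B (n x m)."""
    dX = (X[1:] - X[:-1]) / dt            # finite-difference derivative estimates
    Z = np.hstack([X[:-1], U])            # regressor [x_k, u_k]
    Theta, *_ = np.linalg.lstsq(Z, dX, rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T       # split the stacked solution into A, B
```

Given sufficiently exciting inputs, the regressor has full column rank and the fit recovers the underlying linear dynamics.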
We set the slip safety boundary as \(\mu _{\mathrm{grass}} = 5\) m/second. For the state bias, we let \(\varepsilon = 2\). With the knowledge of the wheel radius \(r = 0.0975\) m, we set the references of the angular and longitudinal velocities as \(\mathbf {w}^{r} = 153.8462\) rad/second and \(\mathbf {v}^{r} = 15\) m/second, respectively. We let the minimum dwell time be \(\mathrm{dwell}_{\min } =\) 0.5 seconds. The controller matrix is solved by the LMI toolbox as \(F_{\text{learned}} = \left[ {\begin{matrix} 109.8254 & -32.5282 \\ 109.3218 & -33.5638 \end{matrix}} \right]\). For the \(\mathcal {L}_{1}\) adaptive controller, we set the adaptation-law parameters as \(K = 10\), \(\rho = 1\), and \(\vartheta = 0.5\). We set the low-pass filter matrices as \(\breve{A} = \breve{B} = \breve{C} = \left[ {\begin{matrix} 1&1 \\ 0&1 \end{matrix}} \right]\). The state predictor gain parameter is set to \(\alpha = 5\).
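The angular reference follows directly from the longitudinal reference and the wheel radius, \(\mathbf {w}^{r} = \mathbf {v}^{r}/r\); a quick check, assuming the slip is measured as \(|v - r\omega |\) in m/second (consistent with the units of the safety boundary):

```python
def angular_reference(v_ref, r):
    """Wheel angular-velocity reference consistent with the longitudinal reference."""
    return v_ref / r

def slip_within_boundary(v, omega, r, mu):
    """Check the (assumed) slip |v - r*omega| against the safety boundary mu [m/s]."""
    return abs(v - r * omega) <= mu
```

For \(r = 0.0975\) m and \(\mathbf {v}^{r} = 15\) m/second, this gives \(\mathbf {w}^{r} \approx 153.8462\) rad/second, matching the setting above.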
The trajectories of the angular velocities, longitudinal velocities, and slips are shown in Figures 6–8, respectively, from which we observe that
Fig. 6. Wheel angular velocities.
Fig. 7. Longitudinal velocities.
Fig. 8. Wheel slips.
the \(\mathcal {L}_{1}\) adaptive controller succeeds in achieving safe velocity regulation, i.e., the vehicle’s angular and longitudinal velocities successfully track their references and the four wheel slips always stay below the safety boundary;
using the normal controller (i.e., only the normal control input (45)), the vehicle cannot achieve safe velocity regulation and finally loses control.
The demonstration video is available in Reference [2].

6.3 Mℒ1HAC vs. ℒ1HAC

In this experiment, we demonstrate safe velocity regulation in dynamic and unforeseen environments via M\(\mathcal {L}_{1}\)HAC. As shown in Figure 9, the vehicle drives from the dirt and grass areas into the snow area; the snow area is the unforeseen environment, in which the vehicle has never driven before and for which no corresponding sensor data exist before entry.
Fig. 9. Dynamic and unforeseen driving environments.
As Assumption 1 states, the environmental perception will accurately detect the unforeseen snow area in advance. To achieve safe velocity regulation in the dynamic and unforeseen environments, the safe operation is organized as follows.
The safety boundaries of slip in the dirt, grass, and snow areas are set as \(\mu _{\mathrm{dirt}}\) = 3.7 m/second, \(\mu _{\mathrm{grass}}\) = 3.3 m/second, and \(\mu _{\mathrm{snow}}\) = 2.7 m/second, respectively. For the state bias, we let \(\varepsilon = 2\).
The velocity references of dirt area are set to \([\mathbf {w}^{r}_{\text{dirt}}, \mathbf {v}^{r}_{\text{dirt}}]\) = [123.6 rad/second, 12 m/second].
The velocity references of grass area are initially set to \([\mathbf {w}^{r}_{\text{grass}}, \mathbf {v}^{r}_{\text{grass}}]\) = [103 rad/second, 10 m/second].
The velocity references of the grass area are reset to \([\mathbf {w}^{r}_{\text{grass}}, \mathbf {v}^{r}_{\text{grass}}]\) = [10.3 rad/second, 1 m/second] 7 m ahead of the snow area.
Once the vehicle enters the snow area, the sensor data from the first 2 seconds are used to learn the vehicle model, which guarantees the prescribed levels of accuracy \(\phi = 0.9\) and confidence \(1 - \delta = 0.7\).
Once the learned model is available, M\(\mathcal {L}_{1}\)HAC immediately updates the vehicle model with \({A_{\text{learned}}} = \left[ {\begin{matrix} 0.2444 & -0.5151 \\ 17.0038 & -17.0038 \end{matrix}} \right]\) and \({B_{\text{learned}}} = \left[ {\begin{matrix} 0.1 & 0.1 \\ 0 & 0 \end{matrix}} \right]\).
The controller matrix in the \(\mathcal {L}_{1}\) adaptive controller is updated with \({F_{\text{learned}}} = \left[ {\begin{matrix} 7.9654 & -24.5730 \\ 6.0458 & -20.5564 \end{matrix}} \right]\).
Based on the learned vehicle model, the velocity references of the snow area are immediately updated to \([\mathbf {w}^{r}_{\text{snow}}, \mathbf {v}^{r}_{\text{snow}}]\) = [40 rad/second, 3.9 m/second].
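The per-area schedule above can be summarized as a small lookup with the 7-m slowdown rule (a hypothetical supervisory helper; area labels and the distance to the snow area are assumed to come from the environmental perception of Assumption 1, and the snow entries are the post-learning values):

```python
AREA_SETTINGS = {  # area: (w_ref [rad/s], v_ref [m/s], slip boundary mu [m/s])
    "dirt":  (123.6, 12.0, 3.7),
    "grass": (103.0, 10.0, 3.3),
    "snow":  (40.0, 3.9, 2.7),
}

def active_references(area, dist_to_snow=None):
    """Return (w_ref, v_ref, mu) for the current area; in the grass area the
    references are reduced once the vehicle is within 7 m of the snow area."""
    w, v, mu = AREA_SETTINGS[area]
    if area == "grass" and dist_to_snow is not None and dist_to_snow <= 7.0:
        w, v = 10.3, 1.0    # reduced references ahead of the unforeseen snow area
    return w, v, mu
```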
The trajectories of the angular velocities, longitudinal velocities, and slips are shown in Figures 10–12, respectively, which demonstrate that
Fig. 10. Wheel angular velocities.
Fig. 11. Longitudinal velocities.
Fig. 12. Wheel slips.
The proposed M\(\mathcal {L}_{1}\)HAC succeeds in safe velocity regulation in the dynamic and unforeseen environments, i.e., the vehicle’s angular and longitudinal velocities successfully track the provided switching references, and the four wheel slips are always below the switching safety boundaries.
The \(\mathcal {L}_{1}\)HAC proposed in Reference [37], i.e., the \(\mathcal {L}_{1}\) controller without model learning, fails to maintain safe velocity regulation in the unforeseen snow environment, which is due to the large model mismatch.
The demonstration video is available in Reference [3].

7 Conclusion

In this article, we have proposed a novel Simplex architecture for safe velocity regulation of self-driving vehicles through the integration of TCS and ABS. To make the Simplex more reliable in unprepared or unforeseen environments, finite-time model learning, in conjunction with safe switching control, is incorporated into the \(\mathcal {L}_{1}\)-based verified safe control. Short-term sensor data of the vehicle state from a single trajectory are used to adaptively update the vehicle model for reliable computation of the control actuation. Experiments performed on the AutoRally platform demonstrate the effectiveness of the model-learning-based \(\mathcal {L}_{1}\)-Simplex for longitudinal vehicle control systems.
Exploring the model-learning-based \(\mathcal {L}_{1}\)-Simplex in coordinating the lateral and longitudinal motion control of self-driving vehicles, as well as demonstrations on a full-size car, constitutes our future research direction.

References

[1]
[n. d.]. Open Source: Revised AutoRally. Retrieved from https://github.com/ymao578/GM.
[2]
[n. d.]. Demonstration Video: \(\mathcal {L}_1\) Adaptive Controller v.s. Normal Controller. Retrieved from https://ymao578.github.io/pubs/m2.mp4.
[3]
[n. d.]. Demonstration Video: M\(\mathcal {L}_{1}\)HAC v.s. \(\mathcal {L}_{1}\)HAC. Retrieved from https://ymao578.github.io/pubs/m1.mp4.
[4]
Kasey Ackerman, Enric Xargay, Ronald Choe, Naira Hovakimyan, M. Christopher Cotting, Robert B. Jeffrey, Margaret P. Blackstun, Timothy P. Fulkerson, Timothy R. Lau, and Shawn S. Stephens. 2016. \(\mathcal {L}_1\) stability augmentation system for Calspan’s variable-stability learjet. In Proceedings of the AIAA Guidance, Navigation, and Control Conference. 0631.
[5]
Ayman A. Aly, El-Shafei Zeidan, Ahmed Hamed, Farhan Salem, et al. 2011. An antilock-braking systems (ABS) control: A technical review. Intell. Contr. Autom. 2, 03 (2011), 186–195.
[6]
Alexander Amini, Wilko Schwarting, Guy Rosman, Brandon Araki, Sertac Karaman, and Daniela Rus. 2018. Variational autoencoder for end-to-end control of autonomous driving with novelty detection and training de-biasing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 568–575.
[7]
Leah Asmelash. Everything You Need to Know About Snow Squalls. Retrieved December 30, 2021 from https://www.cnn.com/2019/12/19/weather/snow-squall-what-is-explain-trnd/index.html.
[8]
Francesco Borrelli, Alberto Bemporad, Michael Fodor, and Davor Hrovat. 2006. An MPC/hybrid system approach to traction control. IEEE Trans. Contr. Syst. Technol. 14, 3 (2006), 541–552.
[9]
Ronald Choe, Olaf Stroosma, Enric Xargay, Herman Damveld, Naira Hovakimyan, and J. Mulder. 2011. A handling qualities assessment of a business jet augmented with an \(\mathcal {L}_1\) adaptive controller. In Proceedings of the AIAA Guidance, Navigation, and Control Conference. 6610.
[10]
V. Colli, Giovanni Tomassi, and Maurizio Scarano. 2006. Single Wheel longitudinal traction control for electric vehicles. IEEE Trans. Power Electr. 21, 3 (2006), 799–808.
[11]
Stefano De Pinto, Christoforos Chatzikomis, Aldo Sorniotti, and Giacomo Mantriota. 2017. Comparison of traction controllers for electric vehicles with on-board drivetrains. IEEE Trans. Vehic. Technol. 66, 8 (2017), 6715–6727.
[12]
C. Canudas De Wit and Panagiotis Tsiotras. 1999. Dynamic tire friction models for vehicle traction control. In Proceedings of the 38th IEEE Conference on Decision and Control. 3746–3751.
[13]
Jullierme Emiliano Alves Dias, Guilherme Augusto Silva Pereira, and Reinaldo Martinez Palhares. 2014. Longitudinal model identification and velocity control of an autonomous car. IEEE Trans. Intell. Transport. Syst. 16, 2 (2014), 776–786.
[14]
George Dimitrakopoulos and Panagiotis Demestichas. 2010. Intelligent transportation systems. IEEE Vehic. Technol. Mag. 5, 1 (2010), 77–84.
[15]
Aditya Gahlawat, Pan Zhao, Andrew Patterson, Naira Hovakimyan, and Evangelos Theodorou. 2020. \(\mathcal {L}_1-GP\): \(\mathcal {L}_1\) adaptive control with Bayesian learning. In Learning for Dynamics and Control. PMLR, 826–837.
[16]
Brian Goldfain, Paul Drews, Changxi You, Matthew Barulic, Orlin Velev, Panagiotis Tsiotras, and James M. Rehg. 2019. AutoRally: An open platform for aggressive autonomous driving. IEEE Contr. Syst. Mag. 39, 1 (2019), 26–55.
[17]
Kyoungseok Han, Mooryong Choi, Byunghwan Lee, and Seibum B. Choi. 2017. Development of a traction control system using a special type of sliding mode controller for hybrid 4WD vehicles. IEEE Trans. Vehic. Technol. 67, 1 (2017), 264–274.
[18]
Kyoungseok Han, Seibum B. Choi, Jonghyup Lee, Dongyoon Hyun, and Jounghee Lee. 2017. Accurate brake torque estimation with adaptive uncertainty compensation using a brake force distribution characteristic. IEEE Trans. Vehic. Technol. 66, 12 (2017), 10830–10840.
[19]
Lukas Hewing, Juraj Kabzan, and Melanie N. Zeilinger. 2019. Cautious model predictive control using gaussian process regression. IEEE Trans. Contr. Syst. Technol. 28, 6 (2019), 2736–2743.
[20]
Naira Hovakimyan and Chengyu Cao. 2010. \(\mathcal {L}_1\) Adaptive Control Theory: Guaranteed Robustness with Fast Adaptation. SIAM.
[21]
Naira Hovakimyan, Chengyu Cao, Evgeny Kharisov, Enric Xargay, and Irene M. Gregory. 2011. \(\mathcal {L}_1\) adaptive control for safety-critical systems. IEEE Contr. Syst. Mag. (2011).
[22]
Valentin Ivanov, Dzmitry Savitski, and Barys Shyrokau. 2014. A survey of traction control and antilock braking systems of full electric vehicles with individually controlled electric motors. IEEE Trans. Vehic. Technol. 64, 9 (2014), 3878–3896.
[23]
P. Khatun, Christopher M. Bingham, Nigel Schofield, and P. H. Mellor. 2003. Application of fuzzy control algorithms for electric vehicle antilock braking/traction control systems. IEEE Trans. Vehic. Technol. 52, 5 (2003), 1356–1364.
[24]
William Kirchner and Steve C. Southward. 2011. An anthropomimetic approach to high performance traction control. J. Behav. Robot. 2, 1 (2011), 25–35.
[25]
Tyler Leman, Enric Xargay, Geir Dullerud, Naira Hovakimyan, and Thomas Wendel. 2009. \(\mathcal {L}_1\) adaptive control augmentation system for the X-48B aircraft. In Proceedings of the AIAA Guidance, Navigation, and Control Conference. 5619.
[26]
Daniel Liberzon. 2003. Switching in Systems and Control. Springer Science & Business Media.
[27]
Guillermo A. Magallan, Cristian H. De Angelo, and Guillermo O. Garcia. 2010. Maximization of the traction forces in a 2WD electric vehicle. IEEE Trans. Vehic. Technol. 60, 2 (2010), 369–380.
[28]
Yanbing Mao, Naira Hovakimyan, Petros Voulgaris, and Lui Sha. Finite-time model inference from a single noisy trajectory. arXiv:2010.06616. Retrieved from https://arxiv.org/abs/2010.06616.
[29]
Rajesh Rajamani. 2011. Vehicle Dynamics and Control. Springer Science & Business Media.
[30]
Elias Reichensdörfer, Dirk Odenthal, and Dirk Wollherr. 2020. On the stability of nonlinear wheel-slip zero dynamics in traction control systems. IEEE Transactions on Control Systems Technology 28, 2 (2020), 489–504.
[31]
Konrad Reif. 2014. Brakes, Brake Control and Driver Assistance Systems. Springer Vieweg, Wiesbaden, Germany.
[32]
Sergio M. Savaresi and Mara Tanelli. 2010. Active Braking Control Systems Design for Vehicles. Springer Science & Business Media, New York.
[33]
Danbing Seto and Lui Sha. 1999. An engineering method for safety region development. Carnegie Mellon University, Software Engineering Institute, Technical Report CMU/SEI-99-TR-018. 1–39.
[34]
Lui Sha. 2001. Using simplicity to control complexity. IEEE Softw. 18, 4 (2001), 20–28.
[35]
Xiaoqiang Sun, Yingfeng Cai, Shaohua Wang, Xing Xu, and Long Chen. 2019. Optimal control of intelligent vehicle longitudinal dynamics via hybrid model predictive control. Robot. Auton. Syst. 112 (2019), 190–200.
[36]
Meihua Tai and Masayoshi Tomizuka. 2000. Robust longitudinal velocity tracking of vehicles using traction and brake control. In Proceedings of the 6th International Workshop on Advanced Motion Control. 305–310.
[37]
Xiaofeng Wang, Naira Hovakimyan, and Lui Sha. 2018. RSimplex: A robust control architecture for cyber and physical failures. ACM Trans. Cyber-Phys. Syst. 2, 4 (2018).
[38]
Dejun Yin, Sehoon Oh, and Yoichi Hori. 2009. A novel traction control for EV based on maximum transmissible torque estimation. IEEE Trans. Industr. Electr. 56, 6 (2009), 2086–2094.



Published In

cover image ACM Transactions on Cyber-Physical Systems
ACM Transactions on Cyber-Physical Systems  Volume 7, Issue 1
January 2023
187 pages
ISSN:2378-962X
EISSN:2378-9638
DOI:10.1145/3582896
Editor: Chenyang Lu

Publisher

Association for Computing Machinery

New York, NY, United States


Publication History

Published: 20 February 2023
Online AM: 19 September 2022
Accepted: 14 September 2022
Revised: 25 January 2022
Received: 15 July 2021
Published in TCPS Volume 7, Issue 1

Author Tags

  1. Simplex
  2. model learning
  3. model switching
  4. \(\mathcal {L}_1\) adaptive controller
  5. safe velocity regulation
  6. traction control system
  7. anti-lock braking system


Funding Sources

  • NSF
