
Adaptive Driving Assistant Model (ADAM) for Advising Drivers of Autonomous Vehicles

Published: 26 July 2022

Abstract

Fully autonomous driving is on the horizon; vehicles with advanced driver assistance systems (ADAS) such as Tesla's Autopilot are already available to consumers. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. A better understanding of drivers’ trust in automation may help reduce these complexities. Much of the existing research on trust in ADAS has relied on use of surveys and physiological measures to assess trust and has been conducted using driving simulators. There have been relatively few studies that use telemetry data from real automated vehicles to assess trust in ADAS. In addition, although some ADAS technologies provide alerts when, for example, drivers’ hands are not on the steering wheel, these systems are not personalized to individual drivers. Needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. In this paper, we propose an architecture for adaptive autonomous driving assistance. Two layers of multiple sensory fusion models are developed to provide appropriate voice reminders to increase driving safety based on predicted driving status. Results suggest that human trust in automation can be quantified and predicted with 80% accuracy based on vehicle data, and that adaptive speech-based advice can be provided to drivers with 90 to 95% accuracy. With more data, these models can be used to evaluate trust in driving assistance tools, which can ultimately lead to safer and appropriate use of these features.

1 Introduction

According to the National Highway Traffic Safety Administration (NHTSA), the number of auto accidents rose from 5.419 million in 2010 to 6.756 million in 2019, an increase of nearly 25% over ten years. Of the 6.756 million accidents in 2019, 41% resulted in injuries or fatalities [1]. This trend is likely to continue as the number of vehicles on the road increases.
Researchers have used a variety of approaches to investigate causes of auto accidents. For example, in a 1980 report, Treat [2] examined how frequently various human, environmental, and vehicular factors were involved in traffic accidents over a five-year period, studying 13,568 police-reported accidents, of which 2,258 were investigated on-scene by technicians and 420 by a multidisciplinary team. Human errors were identified as definite causes in 70.7% of the accidents, environmental factors in 12.4%, and vehicular factors in 4.5%. In 20% of the cases, no definite cause was identified. A taxonomy of direct human causes was developed based on an information-processing model of the driver as a vehicle controller. Singh [3] analyzed data from 5,470 crashes over the period from July 3, 2005 to December 31, 2007. Driver, vehicle, and environment-related information was collected at crash scenes as part of the National Motor Vehicle Crash Causation Survey, conducted by the NHTSA. The last event in the crash causal chain (the critical reason for the crash) was attributed to the driver in 94 percent (±2.2%) of the crashes, to failure or degradation of a vehicle component in 2 percent (±0.7%), and to the environment (slick roads, weather, etc.) in 2 percent (±1.3%). Recognition errors accounted for about 41 percent (±2.1%) of crashes, decision errors 33 percent (±3.7%), and performance errors 11 percent (±2.7%). Dingus et al. [4] used a naturalistic driving dataset of 905 injurious and property-damage crashes; they found driver-related factors, such as error, impairment, fatigue, and distraction, in almost 90% of crashes.
Environmental factors including weather conditions (rain, sleet, snow, and fog) and road pavement conditions (wet, snowy/slushy, or icy) can cause major accidents. On average, nearly 5,000 people are killed and over 418,000 people are injured in weather-related crashes each year [5]. According to NHTSA statistics, the top environmental factors leading to collisions are slick roads (50%) and glare (17%) [3].
In short, major causes of accidents include human error and environmental factors, with human error accounting for roughly 71–90% of accidents [2–5]. Major categories of human error include speeding, distraction, fatigue, and drunk driving. Major categories of environmental factors include road conditions and weather conditions [5]. In this study, we broadened the scope of distraction to include fatigue and drunk driving, and broadened road conditions to include weather conditions, since snow and rain contribute to degradation of road conditions. Therefore, speeding, distraction, and road conditions were considered the primary factors in designing the driving advisor tool described in this study.

2 Literature Review

This section briefly reviews research on advanced driver assistance technologies. In addition, because successful use of these technologies requires human drivers to have appropriate levels of trust in the technology, research related to human drivers’ interactions with driver assistance systems is also reviewed.
Advanced driver assistance systems. Advanced driver assistance systems (ADAS) are active automotive safety systems that utilize advanced sensors such as cameras, radar, lidar, and map databases, comprising a hardware layer for sensing and a software layer of intelligence for post-processing and decision making [6]. ADAS are often classified by the level of automation they achieve, using the Society of Automotive Engineers (SAE) scale, which ranges from 0 (No Automation) to 5 (Full Automation) [6, 7]. While there has been significant research and development (R&D) activity at levels 4 and 5, most ADAS currently on the market are between levels 2 and 3 [6, 8].
Yi et al. [9] suggest that driving assistance systems can be classified into three categories: (1) safe driving systems, such as adaptive cruise control, lane keeping, and collision avoidance, which focus on the vehicle; (2) driver monitoring systems, which monitor drivers and warn them about abnormal driving behaviors and cognitive states; and (3) in-vehicle information systems that provide information and services for the driver, such as directions and traffic conditions. These applications have been implemented using a variety of technologies for sensing and perception (cameras, radar, lidar) and decision-making (artificial intelligence, machine learning, and data fusion) [10–15].
The bulk of ADAS efforts reported in the literature have focused on enhancing vehicle capabilities with a view toward achieving level 5. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed, and it appears humans will continue to be involved in driving for the foreseeable future. Increasing levels of driving automation introduce new complexities to human interactions with cars and can be a double-edged sword [16–18]. For example, studies with level 3 vehicles have found that situations in which the driver must manually take over control from the automated mode increase collision risk with surrounding vehicles [13].
ADAS technologies often provide other types of driver assistance in addition to autonomous driving. For example, Tesla's Model X emits a tone or beep to alert drivers when their hands are not on the steering wheel. Honda and Jaguar have projects to detect a driver's mental state based on factors such as facial expressions, voice, heart rate, and respiration rate [9]. However, Yi et al. [9] note that these systems are generic—based on models developed from behavior of many different drivers—not personalized to individual drivers. Needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. For example, one way of assisting drivers is to provide adaptive speech-based advice as needed, such as telling the driver to speed up, slow down, or stop. This guidance can be based on external factors (such as road or weather conditions), vehicle factors (such as speed and lane keeping), and indicators of the driver's internal state (such as fatigue). Trust in the ADAS can also be a consideration.
Trust. A fundamental issue affecting human interactions with autonomous vehicles is trust [19]. To successfully interact with an ADAS, a human driver needs to have an appropriate level of trust in the system, known as calibrated trust [20]. Too much trust can cause the human to fail to intervene when the system is performing incorrectly. Too little trust fails to leverage the benefits of the system. If human involvement is required, an ADAS should be able to assess how much trust the driver/operator is placing in the system and to consider trust in determining how to provide driver assistance.
Learned trust is a construct that captures how trust evolves over time from initial introduction of an agent to experiencing interaction with an agent to longer‐term interactions [21]. Situational trust is the construct that captures how trust changes based on the external environment (i.e., road types, road conditions, traffic, weather) and internal dynamic characteristics of the operator (mood, attentional capacity, self‐confidence) [21]. Recently, learned and situational trust were specifically mapped onto measures of automated driving [19].
Recent research has also mapped how specific task behaviors during automation use, such as operator interventions, verification behaviors, and response time, correspond to trust [22]. In the driving domain specifically, braking is a common way to disengage automated driving and parking, and thus can serve as an indicator of distrust. Researchers have confirmed this in the lab by using braking frequency and magnitude as indicators of distrust in automated driving styles [23]. Using a real Tesla vehicle, researchers have used driver interventions, through braking, to show that distrust decreased over multiple uses of the automated parking system [24] and that distrust decreased more when drivers were shown how the system worked than when they were told how it worked [25].
Previous research has extensively modeled the relationship between trust, reliance, and ultimate use of automated and robotic systems [16, 20–22, 24–27]. Early work focused on the relationship between machine accuracy, operator self-confidence, reliance, and trust [16, 17, 28–31]. For example, relative trust (trust minus self-confidence) was shown to predict reliance on the automation [29]. Other work showed that trust in a system is higher when its automation is more accurate or reliable [30–32]. Dynamic models of trust calibration, covering the stages prior to and during interaction, have been carefully mapped out [20, 21, 27]. Subsequent work endeavored to chart the antecedents of trust in automated and robotic systems, broadly classifying the important factors as relating to the machine, the human, and the environment and context [26, 33, 34]. Critically, recent work has mapped the theoretical trust concepts originally conceived by Mayer et al. [35] onto the many measures of trust, including self-report, behavioral, and physiological indices, that have been developed over the last several decades. As indicators of risk taking, behavioral measurements, such as interventions, verification behaviors, reliance, and response time, can be indicators of the trust relationship [22]. A more tailored approach to trust in automated driving specifically was recently detailed [19].
Assessing trust. A number of techniques have been developed to measure trust in automated systems such as self-driving vehicles and robots [22, 30, 33, 36, 37]. Survey instruments that collect self-reports of perceptions of trust, such as the Trust Of Automated Systems Test (TOAST) [34] and the Multi-Dimensional Measure of Trust (MDMT) [38, 39], have been most commonly used. Physiological measures such as eye tracking, EEG, and galvanic skin response have also been used [24, 25, 40, 41]. In addition, when using a real vehicle (rather than a simulator), telemetry data such as location, turning, braking, acceleration, and lane keeping can be used to assess driving behavior. Vehicle telemetry data is the most ecologically valid way of collecting data related to trust, but as of this writing, there have been relatively few reports of telemetry data being used to assess trust in ADAS.
Trust in automation of autonomous vehicles can be described for individual features such as autopilot, cruise control, turning, braking, acceleration, and lane keeping. The Tesla Model X controller records and broadcasts this type of information in real time via its Controller Area Network (CAN) bus architecture. These data can be accessed via an On-Board Diagnostics (OBD) port in real time and used to assess whether a driver is under- or over-trusting the vehicle's capabilities. Sensory devices, such as the Tobii eye tracker or Mobileye vision system, can also be used to detect lane keeping and distracted driving [19, 42].
Situation awareness. Situation awareness is an important factor that determines driving performance. For example, a recent computational approach for modeling driver intent using naturalistic driving data demonstrated that lane change performance improved for drivers who checked their mirrors for more than six seconds [43], an approximate measure of situation awareness. Driving with automated vehicles may raise unique issues, such as drivers finding themselves “out of the loop” [44] or having their situation awareness affected while driving with different levels of automated assistance [45]. A review found that situation awareness can deteriorate during adaptive cruise control and highly automated driving when drivers engage in tasks unrelated to driving, but can improve if drivers are motivated, instructed to pay better attention, or receive feedback [46]. More recent work, investigating driving with real automated vehicles on the road, has demonstrated reduced situation awareness of the automated vehicle, increased complacency, and over-trust in automation [45, 47].
Summary. The majority of ADAS efforts reported in the literature have focused on enhancing vehicle capabilities with a view toward achieving fully automated driving. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. A better understanding of drivers’ trust in automation may help reduce these complexities.
Techniques for measuring trust in automated systems include use of surveys to collect self-reports of perceptions of trust; use of physiological measures such as eye tracking, EEG, and galvanic skin response; and, in the case of autonomous driving, use of vehicle telemetry data such as location, turning, braking, acceleration, and lane keeping. Although vehicle telemetry data is the most ecologically valid way of collecting data related to trust, there have been relatively few reports of vehicle telemetry data being used to assess trust in ADAS. Needed is research on the feasibility of using vehicle telemetry data to understand the driver's state of mind.
In addition, although some ADAS technologies provide other types of driver assistance—such as a tone or beep to alert drivers when their hands are not on the steering wheel—these systems are not personalized to individual drivers. Needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams.
The objectives of this research are to (1) identify sensory information and vehicle telemetry data needed to increase driving safety; (2) propose an architecture for an adaptive assistant that can provide verbal guidance to drivers of autonomous vehicles; (3) develop multi-stage sensor fusion models to provide adaptive assistance for drivers; (4) evaluate the models using in-field and simulated data; and (5) suggest future work on adaptive assistance for drivers of autonomous vehicles.

3 Architecture for Adaptive Autonomous Driving Advisor

The overall goal of this research is to develop an Adaptive Autonomous Driving Advisor (AADA) that can provide adaptive speech-based advice as needed, such as telling the driver to speed up, slow down, or stop. The AADA will be built upon an existing data acquisition and measurement system from Ergoneers: a custom-built, PC-based system that includes multiple communication ports as well as CAN bus ports. Sensory devices such as GPS, the Tobii eye tracker, the Mobileye camera, and the Tesla CAN bus cable can be integrated to acquire both vehicle data and driver physical data such as eye and head movements.
AADA will be based on the factors identified in the Introduction: Speed, Road Conditions, Distraction, and Trust. To acquire this information, data from Tesla's CAN bus, GPS, Tobii eye tracker, and Mobileye camera will be utilized and integrated to trigger the AADA to provide appropriate voice instructions via a multistage modeling approach. Figure 1 outlines which sensors will provide which kinds of measurements and information and how the information will be fused via Stage I and Stage II models. Stage I will include four models: a linear model to measure the speed condition (i.e., speeding, normal, or below speed limit), two weighted utility models to predict road conditions and driver distraction, and an Artificial Neural Network/Support Vector Machine/Random Forest (ANN/SVM/RF) model to predict trust. Stage II will use an ANN/SVM/RF model to integrate the outputs from the Stage I models and trigger appropriate voice instructions. In this paper, we focus on the development of the adaptive ANN/SVM/RF models used in Stages I and II.
Fig. 1.
Fig. 1. Architecture of adaptive autonomous driving advisor.
The ultimate goal is to develop an adaptive sensor fusion algorithm whose performance improves as the amount of data increases, since the Adaptive Autonomous Driving Advisor can be used by the same individual over time. Artificial neural networks (ANN) and support vector machines (SVM) were used to develop the machine learning algorithms due to their effectiveness and computational efficiency in handling regressions with high-dimensional, non-linear, covariant inputs [49–52]. The Random Forest (RF) method was also used due to its reputation for robustness to real-world data.
Artificial neural networks are widely recognized supervised learning algorithms that can fit highly non-linear relationships [53, 54]. However, ANN models have some drawbacks. Most notably, they require considerable training time to make accurate predictions, and they can generalize poorly to unseen (“unknown”) data due to their stochastic nature [55, 56]. Therefore, a deterministic non-linear regression method may be preferred when limited training data are available.
Support vector machines, another family of supervised learning models used in classification and regression analyses, are deterministic [57, 58]. The support vector machine is intended to be a robust tool for classification and regression in noisy, complex domains. The two key features of support vector machines are generalization theory, which leads to a principled way to choose a hypothesis; and kernel functions, which introduce non-linearity in the hypothesis space without explicitly requiring a non-linear algorithm [59].
A principal difference between SVMs and ANNs lies in their risk minimization mechanisms [60–62]: SVMs employ the structural risk minimization (SRM) principle to minimize an upper bound on the expected risk, whereas ANNs apply traditional empirical risk minimization (ERM) to the training data. In several fields [49, 55, 56, 60–65], SVM models have proven more robust and deterministic than ANNs, with predictions comparable to ANN results. However, SVM model accuracy depends heavily on the experimental data used.
Random Forest is an ensemble machine learning method built from a collection of individual decision trees; the final prediction is obtained by a majority vote (for classification) or average (for regression) over the trees' predictions [50, 51]. Decision trees split on features to create decision boundaries, typically using the Gini impurity measure, a popular criterion for choosing optimal splits. Each tree considers the features in a different random order, yielding different best splits and thus distinct trees with their own predictions. Random Forests inherently perform feature selection and reduce overfitting on the training set by voting across all the decision trees. In addition, RF methods require relatively little configuration to obtain high accuracy.
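As a concrete reference point, the sketch below instantiates the three model families in scikit-learn; this is an illustrative stand-in (the models in this paper were built with MATLAB toolboxes), and the hyperparameters are defaults rather than the paper's settings.

from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

models = {
    # Feed-forward ANN: trained iteratively, with stochastic weight initialization.
    "ANN": MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0),
    # RBF-kernel SVM: deterministic once its hyperparameters are fixed.
    "SVM": SVR(kernel="rbf", C=1.0, gamma="scale"),
    # Random Forest: an ensemble of randomized trees whose outputs are combined.
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
}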

4 Stage I Model Development: Predicting Trust in Automation

The focus of the Stage I model development process was on developing an ANN/SVM/RF model for using vehicle data to predict a user's trust in automation. Hoff & Bashir propose a framework consisting of three types of trust: Dispositional Trust, Learned Trust, and Situational Trust [21]. Madison et al. describe how Hoff & Bashir's framework might be applied in the context of driving automation [19]. Dispositional trust considers user characteristics such as age, personality, tendency to take risks, and attitudes toward automation. Learned trust is trust based on past experience with a specific system. Situational trust varies based on the external environment and the internal state of the driver. For example, situational trust in an ADAS might vary based on the driver's perception of the vehicle's ability to perform under certain driving scenarios. As part of the modeling process, we also conducted experiments in realistic driving conditions with a Tesla Model X to assess the applicability of Hoff & Bashir's framework within the context of driving automation.

4.1 Experiment Setup

In June and July 2021, nine subjects participated in the designed experiments. Subjects were males between the ages of 18 and 22 with no prior experience with Tesla's Autopilot. The vehicle was a Tesla Model X running software version 2021.4.18. Each subject completed three drives along the same route. For the first two drives, one drive had to be Manual and the other Autopilot; the sequence was left up to the driver. For the third drive, the driver was allowed to choose either Autopilot or Manual mode. While in Autopilot mode, subjects could disengage and re-engage the Autopilot if desired; the Manual mode was manual only. The manual driving mode was included to create a baseline for individual driving performance in the Tesla, providing a way to compare Autopilot-plus-driver performance to driver performance alone.
The experiment tasks were to (1) complete preliminary individual attribute surveys, including the Trust Of Automated Systems Test (TOAST), before driving; (2) drive the Tesla around a designated loop-shaped route three times (Figure 2), using one of three modes (Manual, Autopilot, and Driver's Preference) each time; (3) complete surveys at the end of each loop, including the Multi-Dimensional Measure of Trust (MDMT); and (4) complete a post-drive questionnaire at the end of the drive. The post-drive questionnaire asked participants to rate how much they trusted the Autopilot feature, which was taken as an indicator of trust in automation.
Fig. 2.
Fig. 2. Relationship between theoretical constructs and measures of trust. Adapted from Madison et al. [19].
Figure 2 shows the learned trust associated with the three different drives and situational trust along the driving path for several different driving situations (downhill, straight line, turn, and curve). The goal was to use vehicle data about driving behavior under different drives and situations to model a driver's level of trust in automation. Results from TOAST were used to assess dispositional trust; results from the MDMT and the post-drive questionnaire were used to assess learned trust; and comparisons of driving data under different road conditions were used to assess situational trust.

4.2 Modeling Process

The modeling process was as follows:
(1) Identify which of the self-report trust measures best indicate Trust in Automation.
(2) Identify vehicle data that strongly correlate with the Trust in Automation measures.
(3) Fit the data into a distribution and generate data for modeling and testing.
(4) Develop and evaluate a model to predict Trust in Automation based on the identified effective attributes.
(5) Further evaluate model accuracy using field data and other tests.
Following are the experiments, analysis, and modeling efforts based on the procedure described above.
Step 1. Identify which of the self-report trust measures best indicate Trust in Automation. To identify which self-report measures best indicate Trust in Automation, we first computed correlation coefficients between the Multi-Dimensional Measure of Trust (MDMT) survey administered at the completion of each loop and the post-drive questionnaire. The MDMT measures 16 attributes of trust [39]. The scale is divided into two major constructs: capacity trust and moral trust. Capacity trust has two subscales: reliable and capable. The reliable subscale has four attributes including reliable, predictable, someone you can count on, and consistent. The capable subscale has four attributes including capable, skilled, competent, and meticulous. Moral trust has two subscales: ethical and sincere. The ethical subscale has four attributes including ethical, respectable, principled, and integrity. The sincere subscale has four attributes including sincere, genuine, candid, and authentic. The breakdown and alphas for each dimension can be found in Ullman and Malle [48]. The post-drive questionnaire asked subjects to rate their Trust in Automation using a Likert scale.
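For illustration, the following is a minimal sketch of MDMT subscale scoring in Python, assuming a dictionary of the 16 item ratings; the attribute-to-subscale grouping follows the description above.

import statistics

MDMT_SUBSCALES = {
    "Reliable": ["reliable", "predictable", "someone you can count on", "consistent"],
    "Capable": ["capable", "skilled", "competent", "meticulous"],
    "Ethical": ["ethical", "respectable", "principled", "integrity"],
    "Sincere": ["sincere", "genuine", "candid", "authentic"],
}

def subscale_scores(ratings):
    """Average the four item ratings within each MDMT subscale."""
    return {name: statistics.mean(ratings[item] for item in items)
            for name, items in MDMT_SUBSCALES.items()}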
Table 1 shows the correlations between the post-drive trust rating and each of the 16 MDMT attributes. Note that the MDMT uses a 7-point Likert scale and the post-drive questionnaire uses a 5-point Likert scale. The 16 MDMT attributes can be grouped into four sub-scales: Reliable, Capable, Ethical, and Sincere [39]. Table 2 shows the correlations between the four subscale values and the trust rating from the post drive questionnaire.
Table 1.
Attribute                    Post-Drive Trust Rating
Reliable                     0.708
Sincere                      0.282
Capable                      0.383
Ethical                      0.299
Predictable                  0.515
Genuine                      0.462
Skilled                      0.518
Respectable                  0.249
Someone you can count on     0.475
Candid                       0.224
Competent                    0.457
Principled                   0.478
Consistent                   0.606
Authentic                    0.232
Meticulous                   0.568
Integrity                    0.591
Table 1. Correlations Between the 16 MDMT Attributes and Post-Drive Trust Rating
Table 2.
            Reliable  Capable  Ethical  Sincere
Post Drive  0.6109    0.561    0.583    0.459
Mean        5.6       5.41     5.12     4.15
SD          1.279     1.615    1.531    1.663
Table 2. Post-Drive Trust Rating Correlation, Mean, and Standard Deviation for the Four Combined MDMT Subscales
Results suggest that both the individual attribute Reliable and the Reliable subscale assessed at the evaluation points are moderately correlated with the self-assessment of Trust in Automation in the post-drive questionnaire. The Ethical subscale also showed some degree of correlation, but since the correlation coefficient for Reliable was higher than for Ethical, only the data for Reliable (from the evaluation points and post-drive) were used as the measure of Trust in Automation in formulating the prediction model.
Step 2. Identify vehicle data that strongly correlate with the Trust in Automation measures. Vehicle data were collected in real time from the Tesla Model X CAN bus via the OBD port of a custom-built Ergoneers data acquisition system. Each experiment ran for about two hours and produced approximately one hour of vehicle data. Each data file contains information for 143 different attributes and approximately 1.5 million records related to the vehicle. Processing the data to identify when each major event happened is a challenging task. We first used a bird's eye view approach to recognize major events, such as the starting and stopping time of each drive and when the Autopilot is On or Off. Figure 3 shows GPS data of the driving route; Figure 4 shows a bird's eye view of the data for Autopilot state, speed, braking, and distance to the lane line to the left of the vehicle, which was used to track the vehicle's lane keeping performance.
Fig. 3.
Fig. 3. Route coordinates (GPS).
This view can reveal a variety of information, including when an event (such as enabling Autopilot for the drive) starts and stops, and the length, frequency, and variation of each event. For example, the Autopilot signal has a value of 3 when Autopilot is On and 2 when it is Off; from the Autopilot plot in Figure 4, we can determine that there were three different drives, that the first and third drives were in Autopilot mode, and that the first drive lasted about 15 minutes. From the Distance to Left plot, we can see how much a driver deviates from the lane line to the left of the vehicle. By examining the width and shape of the band, we can determine whether the vehicle is driving in a straight line; if a driver does not have good control of the vehicle, the band will fluctuate a great deal. For example, in the plot shown in Figure 4, the driver consistently shifts to the right.
Fig. 4.
Fig. 4. State of Autopilot, Speed, Brake, and Distance to the Left lane over time for one subject.
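A minimal sketch of this event extraction, assuming a list of timestamps (in seconds) and the corresponding Autopilot CAN values (3 = On, 2 = Off); the function and variable names are illustrative.

def autopilot_segments(timestamps, autopilot_state):
    """Return (start, stop) times of each contiguous Autopilot-On run."""
    segments, start = [], None
    for t, state in zip(timestamps, autopilot_state):
        if state == 3 and start is None:
            start = t                    # Autopilot just engaged
        elif state != 3 and start is not None:
            segments.append((start, t))  # Autopilot disengaged
            start = None
    if start is not None:                # still engaged at the end of the log
        segments.append((start, timestamps[-1]))
    return segments

# Example: a 15-minute Autopilot segment between two manual periods.
ts = list(range(0, 3600, 60))
state = [2] * 10 + [3] * 15 + [2] * 35
print(autopilot_segments(ts, state))  # [(600, 1500)]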
To identify the vehicle data that correlate strongly with the Trust in Automation performance measure, correlation coefficients were calculated to determine input candidates for the prediction model. There were three driving modes (Manual, Autopilot, Driver's Choice) and four driving situations (straight line, downhill, curve, turn) recorded for each drive. Although the Tesla CAN bus broadcast data for 143 attributes, not all of the attributes were considered useful for modeling purposes. Therefore, for each situation (e.g., downhill driving), only six measurements were collected: number of times brakes were applied, average braking time, average speed, average speed standard deviation, average distance to the left line, and standard deviation of distance to the left line. The resulting 24 attributes (delineated in Table 3) were computed for each driving attempt in this study.
Table 3.
                                              Straight line (S)  Downhill (D)  Curve (C)  Turn (T)
No. of times brakes were applied (Bn)         SBn                DBn           CBn        TBn
Average braking time in secs. (Bl)            SBl                DBl           CBl        TBl
Average speed in mph (Sa)                     SSa                DSa           CSa        TSa
Average speed std dev (Ss)                    SSs                DSs           CSs        TSs
Average distance to the left line in cm (La)  SLa                DLa           CLa        TLa
Std dev of distance to the left line (Ls)     SLs                DLs           CLs        TLs
Table 3. Driving Measurements Collected by Vehicle Data Attributes and Driving Situations
There were six viable datasets, each with two attempts in Autopilot mode, for a total of 12 Autopilot attempts. For each driving attempt, the correlation coefficient between each of the 24 attributes and the subject's self-assessment of Trust in Automation was calculated. Table 4 shows the seven vehicle attributes (out of the 24) that yielded moderate to strong correlations with the self-assessment of Trust in Automation. For development of the prediction model, we used data from the top three attributes as inputs: DSa (average speed when driving downhill), TLs (standard deviation of the distance to the left line during turns), and CSs (speed standard deviation on curves). These attributes all correlated strongly with the Trust in Automation measure (based on the MDMT Reliable subscale).
Table 4.
Subject     Mode  Trust  DSa    TBl    TSa    TLa    TLs   CSa    CSs
21191817    Auto  5.75   43.91  2.98   30.18  −0.18  0.09  40.47  3.09
21164616    Auto  5.25   44.66  11.94  27.36  −0.18  0.04  42.47  4.88
21199974    Auto  4.00   43.35  5.01   29.38  −0.10  0.25  44.76  0.90
21131964    Auto  6.25   44.85  6.24   29.15  −0.18  0.06  42.24  4.66
21160564    Auto  7.00   44.77  14.59  25.48  −0.18  0.03  23.71  10.16
21191823    Auto  6.75   44.83  21.61  24.54  −0.18  0.07  39.73  8.05
Correlation Coefficient    0.82   0.59   −0.63  −0.79  −0.81  −0.68  0.89
Table 4. Correlation Coefficients Among Trust in Automation Assessment and Vehicle Attribute Measurements
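As a sketch of this screening step, the following ranks attributes by the absolute value of their Pearson correlation with the trust measure, reproducing the Table 4 coefficients for the three selected attributes (pandas is an assumed tool choice; the paper does not specify its correlation software).

import pandas as pd

def top_attributes(df, target="Trust", k=3):
    """Rank attribute columns by |Pearson r| against the trust measure."""
    r = df.drop(columns=[target]).corrwith(df[target])
    return r.reindex(r.abs().sort_values(ascending=False).index).head(k)

# The six Autopilot rows of Table 4, restricted to the three top attributes:
df = pd.DataFrame({
    "Trust": [5.75, 5.25, 4.00, 6.25, 7.00, 6.75],
    "DSa": [43.91, 44.66, 43.35, 44.85, 44.77, 44.83],
    "TLs": [0.09, 0.04, 0.25, 0.06, 0.03, 0.07],
    "CSs": [3.09, 4.88, 0.90, 4.66, 10.16, 8.05],
})
print(top_attributes(df))  # CSs ≈ 0.89, DSa ≈ 0.82, TLs ≈ −0.81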
Step 3. Fit the data into a distribution and generate data for modeling and testing. Because the number of viable data sets was limited, we first fit the data identified in Step 2 into distributions, then generated the data needed for modeling. The vehicle data captured while the driver was in Autopilot mode were fit to a suitable distribution and validated using the Lilliefors test. The Lilliefors test, which is based on the Kolmogorov–Smirnov test, is appropriate for small data sets (fewer than 25 samples). Following is a summary of the process for each attribute of interest and the performance measure.
(a) Trust in Automation: The top two distribution-fitting candidates were the Lognormal and Normal distributions. Since the lower and upper bounds range from 1 to 7, the suggested distribution was further adjusted to Normal(4, 0.95), which covers approximately 99.73% of the population.
(b) Downhill Speed average (DSa): The top distribution candidate was Normal(44.40, 0.62).
(c) Standard deviation of distance to the left line during turns (TLs): The top distribution candidate was Pareto(1.3657, 0.032478).
(d) Curve Speed standard deviation (CSs): The top two distribution candidates were ExtValue(3.8081, 2.6365) and Normal(5.2911, 3.3464). We chose the normal distribution for this study because we believe speed has variations caused by the subject as well as other factors such as road conditions.
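A sketch of this fit-validate-generate step for the DSa attribute, using scipy and statsmodels as assumed stand-ins for the paper's fitting tools (norm.fit returns maximum-likelihood estimates, so its standard deviation differs slightly from the sample value of 0.62 reported in (b)):

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

dsa = np.array([43.91, 44.66, 43.35, 44.85, 44.77, 44.83])  # DSa column of Table 4

mu, sigma = stats.norm.fit(dsa)                  # close to the reported Normal(44.40, 0.62)
ks_stat, p_value = lilliefors(dsa, dist="norm")  # Lilliefors normality check
print(f"Normal({mu:.2f}, {sigma:.2f}); Lilliefors p = {p_value:.3f}")

# If the fit is acceptable, draw synthetic samples for model development:
rng = np.random.default_rng(0)
synthetic_dsa = rng.normal(mu, sigma, size=1000)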
Step 4. Develop and evaluate a model to predict Trust in Automation based on the identified effective attributes. Nonlinear machine learning modeling techniques (ANN, SVM, and RF) were applied to model and predict a user's Trust in Automation based on vehicle data. Model effectiveness was then evaluated by randomly dividing the data into testing and evaluation sets. For the ANN model, we applied a 3-2-1 topology: DSa, TLs, and CSs as the three input nodes, two hidden nodes, and one output node, Trust in Automation (TIA). Table 5 shows the results for the TIA-ANN, TIA-SVM, and TIA-RF models. The prediction accuracies of TIA-ANN and TIA-SVM were very close, while the accuracies of TIA-RF were significantly lower. Accuracy did not increase much as the sample size grew to 1,000 data sets. Accuracy for sample i is defined as 1 − |TPi − TTi| / TTi, where TPi is the predicted trust value and TTi is the target trust value for data sample i; the accuracy values in the tables below are averages over a given set of samples (such as 125 data sets).
Table 5.
         125 data sets  250 data sets  500 data sets  1000 data sets
TIA-ANN  0.77738        0.76151        0.76386        0.78691
TIA-SVM  0.77804        0.79971        0.78531        0.78722
TIA-RF   0.43200        0.45600        0.39800        0.39499
Table 5. Prediction Accuracy of TIA-ANN, TIA-SVM, and TIA-RF Models for Different Sample Sizes
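The sketch below illustrates this step end to end under stated assumptions: inputs are drawn from the Step 3 distributions, and, because the coupling between generated inputs and generated trust targets is not fully specified in the text, the trust target here is synthesized to follow the signs of the Table 4 correlations. scikit-learn stands in for the MATLAB toolboxes used in the paper.

import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def paper_accuracy(pred, target):
    """Accuracy as defined above: mean over samples of 1 - |TP_i - TT_i| / TT_i."""
    return float(np.mean(1.0 - np.abs(pred - target) / target))

rng = np.random.default_rng(0)
n = 500
dsa = rng.normal(44.40, 0.62, n)                                     # Step 3(b)
tls = stats.pareto(b=1.3657, scale=0.032478).rvs(n, random_state=0)  # Step 3(c)
css = rng.normal(5.2911, 3.3464, n)                                  # Step 3(d)
X = np.column_stack([dsa, tls, css])

# Assumption: trust rises with DSa and CSs and falls with TLs (Table 4 signs),
# plus noise, clipped to the 1-7 trust scale.
z = lambda v: (v - v.mean()) / v.std()
y = np.clip(4 + 0.5 * (z(dsa) - z(tls) + z(css)) + rng.normal(0, 0.4, n), 1, 7)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "TIA-ANN": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(2,),
                                          max_iter=5000, random_state=0)),
    "TIA-SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    "TIA-RF": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(paper_accuracy(model.predict(X_te), y_te), 3))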
Step 5. Further evaluate model accuracy using field data and other tests. The prediction accuracy of the designed models was further evaluated using other available field data and tests of robustness.
Manual and Driver Preference modes. The above models were built using data generated from subjects driving in Autopilot mode, because the Autopilot data are likely different from the data collected in the Manual and Driver Preference modes (denoted M&D). For example, there could be a learning effect on driver performance and perception of vehicle capability when using Autopilot for a second time. However, the M&D data were used to evaluate the noise tolerance (robustness) of the developed models. Table 6 shows the testing results, which suggest the models are noise tolerant. In particular, the TIA-ANN model performed better than the TIA-SVM and TIA-RF models.
Table 6.
         M&D      M&D + 125 Data Sets
TIA-ANN  0.72211  0.76997
TIA-SVM  0.66904  0.76316
TIA-RF   0.33333  0.40159
Table 6. Test of Model Accuracy with Field Data from Manual and Driver Preference Modes
Noise Tolerance and Sensitivity Analysis. To test the robustness of the developed models, we deliberately added noise to the data to see how the models would respond in terms of accuracy. Noise was added by multiplying every original value in the dataset by 1.05, 1.10, or 1.15; for example, a 5% noise level was added as Value5% = (original value) × 1.05, so the noisy values deviate 5% from the originals. The same noise level was applied to every data point, so all the data used for training, testing, and validation shared that noise level. Table 7 shows the results for the three models. Results suggest only a 1% to 2% difference in accuracy when the noise level increases to 15%. In one case, accuracy increased as the noise level increased, suggesting that the introduced noise happened to fit the underlying data distribution.
Table 7.
                          No noise added  +5% Noise  +10% Noise  +15% Noise
TIA-ANN (125 data sets)   0.77738         0.75871    0.74849     0.75871
TIA-SVM (125 data sets)   0.77804         0.77804    0.77807     0.75724
TIA-RF (125 data sets)    0.43200         0.42399    0.41600     0.39200
TIA-ANN (250 data sets)   0.76151         0.79625    0.76940     0.79625
TIA-SVM (250 data sets)   0.79971         0.79971    0.81396     0.79971
TIA-RF (250 data sets)    0.45600         0.46799    0.41200     0.45599
TIA-ANN (500 data sets)   0.76386         0.76791    0.77260     0.76791
TIA-SVM (500 data sets)   0.78531         0.78531    0.79086     0.78531
TIA-RF (500 data sets)    0.39800         0.39600    0.39599     0.37600
TIA-ANN (1000 data sets)  0.78691         0.77230    0.78621     0.77219
TIA-SVM (1000 data sets)  0.78722         0.78763    0.78763     0.78722
TIA-RF (1000 data sets)   0.39499         0.39460    0.39699     0.39200
Table 7. Noise Tolerance Testing Results
Use of additional vehicle attributes. We explored further improving model accuracy by adding more driving attributes to the modeling process. For example, Table 4 shows that the correlation between the average distance to the left line during turns (TLa) and Trust in Automation was −0.79. Data from Autopilot mode on TLa, CSa (average speed on curves), TSa (average speed during turns), and TBl (average braking time during turns) were fitted to distributions and validated using the Lilliefors test. Statistical testing suggested Expon(0.0172, −0.1793), ExtValueMin(41.6520, 3.8349), Pareto(0.96522, 2.9800), and Uniform(23.4088, 31.3075) as the top candidates for these attributes, respectively. Additional data sets were generated based on those distributions. Table 8 shows that when going from three to four attributes, the accuracy of the ANN model increased slightly. However, ANN accuracy did not continue to increase with five, six, or seven attributes, and SVM accuracy decreased when going from three to four attributes. Because our ultimate goal is to process vehicle attribute data online in real time, we chose to use three attributes to reduce computational complexity.
Table 8.
         3 Attributes  4 Attributes  5 Attributes  6 Attributes  7 Attributes
TIA-ANN  0.78691       0.79279       0.77894       0.78009       0.7828
TIA-SVM  0.78722       0.77996       0.77629       0.77610       0.7727
TIA-RF   0.39499       0.37899       0.38800       0.32700       0.3269
Table 8. Effect of the Number of Attributes on Model Accuracy

4.3 Stage I Findings and Discussion

Initially, we looked to time spent in Autopilot and number of braking events as indicators of subjects' trust in automation. However, these attributes did not strongly correlate with the individual assessments of trust in automation. This may be because only 20% of the driving path consisted of downhill, curve, and turn situations (see Figure 3). The trust signal was stronger when focusing only on the sections of the driving path that require greater cognitive load, such as downhills, curves, and turns. Results suggest that the Downhill Speed average (DSa), the standard deviation of distance to the left line during turns (TLs), and the Curve Speed standard deviation (CSs) yield the strongest correlation coefficients with self-assessed trust in automation. These attributes can potentially be used to continuously assess a subject's trust in automation as the number of driving attempts increases.
Of the three machine learning models developed, ANN and SVM yielded better accuracy than RF under a variety of conditions, including added noise, sample size variations, and tests with field data from the manual and driver preference modes.
The developed models (ANN, SVM, and RF) appear to be robust and fault tolerant. Even with added noise of up to 15%, accuracy was reduced by only 2% to 3%; in some cases, accuracy increased by 2% to 3%. When tested with data not previously seen by the models (from the Manual and Driver Preference modes), the ANN model (0.72) performed better than SVM (0.66) and much better than RF (0.33).
The models need appropriate sample sizes. If the sample size is too small (fewer than 25), the models cannot find a good fit, and accuracy suffers. However, as sample size increases, model accuracy may not increase in proportion to the amount of data. Future research could include determining the right sample size when developing machine learning models.
Accuracy did not improve by more than 1% when the number of vehicle attributes (i.e., the number of input nodes) increased. This may be because the top three attributes had strong correlations with the Trust in Automation variable, whereas the other four attributes had relatively modest correlations with the self-assessment of trust in automation. Future research may include examining the extent to which these attributes are independent of one another, in addition to further study of their correlation with the performance measure (trust in automation).
For purposes of providing personalized driving assistance in real time, using fewer attributes is advantageous because computational requirements are reduced. Ultimately, vehicle attribute data may replace self-assessments of trust in the vehicle's capability (learned trust). Future directions may include using dispositional trust in place of learned trust. A machine learning model such as an ANN can be developed as a base model reflecting individual differences; the model can then improve itself as the individual drives more often and more data are generated. In this way, the model becomes a personalized model that adapts and becomes smarter over time.

5 Stage II Model Development: Adaptive Driving Assistant Model (ADAM)

The focus of the Stage II model development process was on developing Adaptive Driving Assistant Models (ADAM) based on ANN/SVM/RF techniques that can integrate the outputs from the four Stage I models (in Figure 1) and trigger appropriate voice instructions.

5.1 Model Development

Following are the steps followed in developing the ADAM models: (1) classify risk factors into categories and levels; (2) identify sensory device(s) for use in detecting risk factors; and (3) develop sensor fusion algorithms to integrate sensory data.
Step 1: Classify risk factors into categories and levels. Based on the literature review [2–5], four categories of risk factors are considered: Speed, Distraction, Road Conditions, and Trust. Within each factor, there are three levels of severity (e.g., for Speed, the levels are Over Speed, Normal, and Under Speed). Therefore, there are 81 possible combinations (3 × 3 × 3 × 3) to be considered in this study. In addition, there are five possible types of advice the system can provide: Slow down, Speed up, Brake, Stop, and Nothing.
Step 2. Identify sensory devices for use in detecting risk factors. Sensory devices were designated for monitoring each factor. Some of the data are from Tesla's CAN bus and some are from external sensory devices. Table 9 shows devices used for each factor.
Table 9.
Factor          Sensory devices
Speed           Mobileye, GPS
Distraction     Hands on wheel, Cruise control, Tobii eye tracking
Road condition  Tesla CAN bus, Mobileye
Table 9. Devices Used for Each Factor
Step 3. Develop sensor fusion algorithms to integrate sensory data. For the ANN model, a 4 × 3 × 1 topology was used: four inputs, three hidden nodes, and one output. The four inputs are speed, distraction, road conditions, and trust, and the output is the type of guidance to provide to the driver. Between the inputs and the output is a hidden layer connecting input nodes to the output node via weighted links whose activation functions enable non-linear fitting. For the SVM model, a regression model built upon the speed, distraction, road conditions, and trust data sets is employed to predict the type of guidance to provide to the driver.
The ANN model was built using the MATLAB Neural Network Toolbox TrainLM function, which is based on the Levenberg-Marquardt backpropagation algorithm.
The SVM regression model was trained and cross-validated using the fitrsvm function in MATLAB (part of the Statistics and Machine Learning Toolbox), which maps the predictor data using a Radial Basis Function (RBF) kernel. An SVM model has two essential hyperparameters: cost (c) and gamma (g). Cost is the tolerance for error, which determines the generalizability of the model; gamma is a parameter of the RBF kernel that is inversely related to the number of support vectors, which affects training and prediction speed. To train an efficient model that neither overfits nor underfits, the values of c and g must be kept within an appropriate range. Hence, grid search and cross-validation (CV) are used to find the best c and g automatically. To initiate the grid search, a set of candidate c and g values is designated; based on the selected scoring standard, the best settings are determined after exhausting all combinations of parameters. To prevent the model from becoming too complicated, which may lead to overfitting, cross-validation is implemented simultaneously with grid search: the training set is randomly divided into several subsets, and in each round one subset is held out for validation while the others are used for training. These two mechanisms (grid search and cross-validation) were combined to tune the parameters, improving training efficiency and model performance.
Cross-validation was also used in training and tuning the Random Forest model. The maximum depth of the trees was selected after training via this tuning process; limiting tree depth prunes the leaves, which reduces overfitting and can remove the influence of noise.
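As an analog of this tuning procedure, here is a sketch using scikit-learn's GridSearchCV (the paper used MATLAB; the parameter grids shown are illustrative, not the paper's).

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Grid search over cost (C) and gamma for the RBF-kernel SVM,
# with 5-fold cross-validation to guard against overfitting.
svm_search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
    cv=5,
)

# Depth-limited Random Forest: pruning depth reduces overfitting and noise.
rf_search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"max_depth": [2, 4, 8, None]},
    cv=5,
)
# After svm_search.fit(X_train, y_train), svm_search.best_params_ holds (C, gamma).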
The procedure for producing the output node values for the ANN and SVM models was as follows:
(1)
For each input type, assign relative weight for each level of severity.
(2)
Calculate the accumulated weights for each possible outcome of the output node.
(3)
Fit the weights of each outcome into a distribution.
(4)
Redistribute the data into groups based on the number of possible outcomes of the output node, so that the number of data groups on the histogram equals the number of possible outcomes. Each group of data represents the probability of one possible outcome.
Table 10 shows the weights assigned to each factor.
Table 10.
Factor          Associated levels and assigned weights
Speed           1 – Normal; 2 – Below; 3 – Over
Distraction     1 – Normal; 2 – Light; 3 – Severe
Road condition  1 – Normal; 2 – Poor; 3 – Worst
Trust           1 – Normal; 2 – Partial; 3 – Complete
Outcomes        Nothing, Speed up, Slow down, Brake, and Stop
Table 10. Factors, Associated Levels, and Assigned Weights
Since four factors are considered and each factor has three levels, there are 81 possible combinations. After assigning weights to the severity levels of each factor, we can fit the overall value of each possible outcome into a distribution. At the same time, we can arrange the data into a histogram with the number of groups equal to the number of predetermined outcomes, which is five in this case. As shown in Figure 5, a normal distribution with mean 7 and standard deviation 1.4811 is a relatively good fit and forms five different groups with boundary values of 4.9, 6.35, 7.7, and 9.1. We can further normalize each outcome value between 0 and 1 using these group boundary values. The outcome values are calculated assuming the Speed, Distraction, and Road condition factors are equally weighted, plus half of Trust (since Trust is negatively correlated with the outcomes).
Fig. 5.
Fig. 5. Fitting of possible combinations of outcomes into a normal distribution.
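A minimal sketch of this construction in Python; the assignment of the five advice labels to the five score bins is an assumption made for illustration.

from collections import Counter
from itertools import product

ADVICE = ["Nothing", "Speed up", "Slow down", "Brake", "Stop"]
BOUNDS = [4.9, 6.35, 7.7, 9.1]  # group boundaries from the fitted Normal(7, 1.4811)

def advice_for(speed, distraction, road, trust):
    """Score a combination: three equally weighted factors plus half of Trust."""
    score = speed + distraction + road + 0.5 * trust
    return next((a for a, b in zip(ADVICE, BOUNDS) if score < b), ADVICE[-1])

# Enumerate all 81 level combinations (3 levels for each of 4 factors)
# and count how many fall into each advice group.
counts = Counter(advice_for(*combo) for combo in product((1, 2, 3), repeat=4))
print(counts)

As a sanity check, the mean of these 81 scores is 7 (three factors averaging 2 each, plus half of a trust level averaging 2), consistent with the fitted Normal(7, 1.4811).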
Figure 6 shows the data distribution when the outcome is calculated as the sum of four equally weighted factors (Speed, Road Condition, Distraction, and Trust). The 81 possible outcomes suggest a normal distribution N(8, 1.64), which covers the range of 4 to 12 with 97.5% of the population. After consolidating the nine groups of data into five groups, the distribution resembles a uniform distribution U(0.95, 5.05); Figure 7 shows the distribution after the groups were consolidated.
Fig. 6.
Fig. 6. Data distribution before reducing nine groups to five.
Fig. 7.
Fig. 7. Data distribution after reducing nine groups to five.

5.2 Model Evaluation

To evaluate the Stage II models, we used both simulated data and field data to represent the outputs from the four Stage I models. Two approaches were used—comprehensive and historical—based on the data source. In the historical approach, data from external sources are used to represent past and current situations. In the comprehensive approach, data are generated to simulate a broad spectrum of events. This approach can include rare events, allowing possible future events to be represented. Using these two approaches together allows us to thoroughly evaluate and assess the robustness of the proposed models.
Comprehensive. For the comprehensive approach, we fit each factor value to a distribution and then generate more data from the underlying distribution, with parameters revised as needed. Figures 8, 9, and 10 show possible distribution candidates for fitting the existing data and the revised underlying distributions used to generate additional data: Uniform(1, 3), Normal(2, 0.3), and Triangular(1, 2, 3), respectively.
Fig. 8.
Fig. 8. Example of fitted uniform distribution used for road condition.
Fig. 9.
Fig. 9. Example of fitted normal distribution used for speed.
Fig. 10.
Fig. 10. Example of fitted triangular distribution used for road condition.
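A sketch of this data generation using numpy; the factor-to-distribution assignment follows the figure captions where given, and the triangular draw is assigned to distraction here for illustration (the sample size and seed are arbitrary).

import numpy as np

rng = np.random.default_rng(0)
n = 500
road = rng.uniform(1, 3, n)               # Uniform(1, 3), Figure 8
speed = rng.normal(2, 0.3, n)             # Normal(2, 0.3), Figure 9
distraction = rng.triangular(1, 2, 3, n)  # Triangular(1, 2, 3), Figure 10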
Tables 11, 12, and 13 show the accuracy of ADAM's ANN-, SVM-, and RF-based sensor fusion algorithms under different distributions and numbers of data sets. Results suggest that (1) accuracy increases as sample size increases; (2) there is not much increase in accuracy after the sample size reaches 500 data sets; (3) the algorithms perform a little better when the data follow a normal distribution; and (4) ADAM-ANN provides more stable (less variable) accuracy. With smaller samples, ADAM-ANN performs slightly better than ADAM-SVM, while ADAM-RF performs much worse.
Table 11.
          125 sets  250 sets  500 sets  1000 sets
ADAM-ANN  0.87547   0.8725    0.87581   0.89429
ADAM-SVM  0.87249   0.93459   0.88662   0.89344
ADAM-RF   0.59378   0.64894   0.78642   0.77374
Table 11. Accuracy Comparison of ADAM-ANN, ADAM-SVM, and ADAM-RF with Normally Distributed Data Sets
Table 12.
          125 sets  250 sets  500 sets  1000 sets
ADAM-ANN  0.84786   0.86614   0.86759   0.87048
ADAM-SVM  0.80853   0.861     0.86889   0.89068
ADAM-RF   0.72776   0.74634   0.75014   0.80184
Table 12. Accuracy Comparison of ADAM-ANN, ADAM-SVM, and ADAM-RF with Uniformly Distributed Data Sets
Table 13.
          125 sets  250 sets  500 sets  1000 sets
ADAM-ANN  0.84644   0.85822   0.86459   0.87621
ADAM-SVM  0.80588   0.87113   0.85141   0.86823
ADAM-RF   0.65034   0.69096   0.74113   0.78097
Table 13. Accuracy Comparison of ADAM-ANN, ADAM-SVM, and ADAM-RF with Triangularly Distributed Data Sets
Historical. In this approach, we use historical data collected by government agencies, insurance companies, and third-party research foundations, representing events in the U.S., to generate Speed, Distracted Driving, and Road Condition data for the input nodes of the ADAM-ANN, ADAM-SVM, and ADAM-RF algorithms. To generate data for the Trust in Automation factor, we rely on research findings from Dikmen and Burns [66].
Dikmen and Burns [66] surveyed Tesla drivers about their confidence in Autopilot and related features. Overall, participants reported high levels of trust in Autopilot (M = 4.02, SD = 0.65) and moderate levels of initial trust (M = 2.83, SD = 0.82) on 5-point Likert scales. Trust in Autopilot was positively correlated with frequency of Autopilot use, self-rated knowledge about Autopilot, ease of learning, and usefulness of Autopilot displays.
Table 14 shows categories of factors and the frequency with which they affect driving, according to a 2016 survey by AAA [67]:
Table 14.
Impaired driving statistics (Group A)
Category                Frequency                                              Approximation
Alcohol level too high  1/8 in last 12 months; 9% more than once in past year  1/8, 0.09 × 2 = 0.18
Distribution: 0.18

Distracted driving statistics (Group B)
Category                           Frequency                           Approximation
Talking on a cell phone            2/3 in last 30 days; 33% regularly  1/3 + (2/3)/12 = 0.389
Reading a text message or email    2/5 in last 30 days; 12% regularly  (2/5)/12 + 0.12 = 0.153
Typing or sending a text or email  1/3 in last 30 days; 8% regularly   (1/3)/12 + 0.08 = 0.108
Distribution: ranges from 0.108 to 0.389; average of 20%

Drowsy driving statistics (Group C)
Category  Frequency                                                  Approximation
Tired     Could not keep eyes on the road last month; 20% regularly  1/5 = 0.20
Distribution: 0.2

Overall distribution: A: 18%, B (average): 20%, C (combined with normal): 62%
Table 14. Statistics About Various Factors Related to Distracted Driving (Based on [67])
Overall driving distraction categories include use of electronics (phone use, texting, reading emails), fatigue, and driving while impaired. Assuming these three categories are independent of one another, the percentages can be estimated as 18% for driving under the influence, 20% for distracted driving, and 62% for driving in a tired or normal condition (because driving while tired does not necessarily cause accidents, this group was combined with the normal driving condition group).
The AAA survey [67] also summarized how drivers behave when speeding:
Nearly half of all drivers (48 percent) report going 15 mph over the speed limit on a freeway in the past month, while 15 percent admit doing so fairly often or regularly.
About 45 percent of drivers report going 10 mph over the speed limit on a residential street in the past 30 days, and 11 percent admit doing so fairly often or regularly.
Based on data from The Washington Post [68] and Caring.com [69], 20% of drivers who are age 65 or above tend to drive under the speed limit. Table 15 summarizes these statistics and classifies the four categories into three groups with associated percentages: Group A, over the speed limit, 19%; Group B, under the speed limit, 18.3%; and Group C, within the speed limit, 62.7%.
Table 15.
Category                   Frequency                                           Group  Distribution
Highway speeding           48% drove 15 mph over in past month; 15% regularly  A      0.48/12 + 15% = 19%
Residential speeding       45% drove 10 mph over in past month; 11% regularly  A      0.45/12 + 11% = 14.75%
Driving too slow [68, 69]  About 1/6 to 20% of drivers above age 65            B      16.6% to 20%
Normal                                                                         C
Overall distribution: A: 19%, B: 18.3%, C: 62.7%
Table 15. Statistics About Speeding on Highways and in Residential Areas
According to ten-year averages from 2007 to 2016 analyzed by Booz Allen Hamilton based on NHTSA data [5], over 5,891,000 vehicle crashes occur each year on average. Approximately 21% of these crashes (nearly 1,235,000) are weather-related. Weather-related crashes are defined as those that occur in adverse weather (i.e., rain, sleet, snow, fog, severe crosswinds, or blowing snow/sand/debris) or on slick pavement (i.e., wet, snowy/slushy, or icy pavement). The vast majority of weather-related crashes happen on wet pavement (70%) and during rainfall (46%). A much smaller percentage occur during winter conditions: 18% during snow or sleet, 13% on icy pavement, and 16% on snowy or slushy pavement. Only 3% happen in the presence of fog [5]. Table 16 summarizes these statistics about accidents caused by road/weather conditions.
Table 16.
Condition              Frequency  Group  Distribution
Wet pavement           15%        A      20%
Rain                   10%        A
Snow/sleet             4%         A
Icy pavement           3%         A
Snowy/slushy pavement  4%         A
Fog                    1%         B      1%
Normal                            C      79%
Overall distribution: A: 20%, B: 1%, C: 79%
Table 16. Statistics About Accidents Caused by Road/Weather Conditions
Based on the above historical statistics for Speeding, Distracted Driving, and Road Conditions, data sets were generated for modeling and evaluating the ADAM-ANN, ADAM-SVM, and ADAM-RF sensor fusion algorithms; a sketch of this data generation follows Table 17. Table 17 shows that all three algorithms performed well, with accuracies ranging from roughly 86% to 95%.
Table 17.
Model | 125 sets | 250 sets | 500 sets | 1000 sets
ADAMANN | 0.90717 | 0.94259 | 0.93161 | 0.95248
ADAMSVM | 0.87745 | 0.91474 | 0.93576 | 0.94632
ADAMRF | 0.85600 | 0.88057 | 0.94349 | 0.95484
Table 17. Evaluation of ADAMANN, ADAMSVM, and ADAMRF Using Historical Data

5.3 Stage II: Findings and Discussion

In Stage II, we designed and evaluated three machine learning models (ANN, SVM, and RF) for providing driving advice to drivers of autonomous vehicles, using data generated from historical statistics and from fitted distributions. For the models based on historical statistics, accuracy ranged from 90% to 95% for ANN, 87% to 94% for SVM, and 85% to 95% for RF. For the models based on fitted distributions, accuracy ranged from 85% to 89% for ANN, 80% to 92% for SVM, and 59% to 73% for RF. These results suggest that (1) all three models perform better when using data generated from historical statistics than from fitted distributions, perhaps because the fitted distributions have greater variation than the historical statistics; (2) the ANN and SVM models are a good fit for this application; (3) the ANN model seems most stable and adaptive, gradually improving its accuracy as sample size increases; and (4) the SVM model in one instance improved its accuracy faster than the other two models (Table 11, 250 data sets). Overall, the modeling methodology appears sound and yields good results. Future directions include (1) using a hybrid SVM and ANN modeling approach to improve model accuracy at different sample sizes; (2) developing a novel model based on ANN topology for real-time data processing; and (3) developing a plug-in portable hardware system that incorporates sensory devices and machine learning algorithms to provide real-time personalized voice advice to drivers.
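As an illustration of this evaluation loop, the sketch below generates labeled data sets of the sizes used in Table 17 from the Tables 14 to 16 distributions and scores an ANN (multilayer perceptron), an SVM, and a random forest with scikit-learn. The feature encoding, the advice-labeling rule, and all hyperparameters here are stand-ins of our own; the paper does not specify them, so this is a minimal sketch of the methodology rather than a reproduction of Table 17.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(seed=1)

def make_dataset(n):
    """Sample driver state, speeding group, and road condition from the
    historical distributions (Tables 14-16) and attach an advice label.
    The labeling rule is a stand-in, not the paper's rule."""
    driver = rng.choice(3, size=n, p=[0.18, 0.20, 0.62])   # Table 14
    speed = rng.choice(3, size=n, p=[0.19, 0.183, 0.627])  # Table 15
    road = rng.choice(3, size=n, p=[0.20, 0.01, 0.79])     # Table 16 (A rounded up so p sums to 1)
    X = np.column_stack([driver, speed, road]).astype(float)
    # Stand-in advice classes: 2 = strong warning, 1 = caution, 0 = none.
    y = np.select([driver == 0, (speed == 0) | (road == 0)], [2, 1], default=0)
    return X, y

models = {
    "ADAMANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    "ADAMSVM": SVC(kernel="rbf"),
    "ADAMRF": RandomForestClassifier(n_estimators=100, random_state=0),
}

for n in (125, 250, 500, 1000):  # sample sizes from Table 17
    X, y = make_dataset(n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    for name, model in models.items():
        print(n, name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```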

6 Assessment of Applicability of Hoff & Bashir's Framework Within the Context of Driving Automation

Although the focus of this research was on model development, we also examined the self-report and vehicle data for signs that Hoff & Bashir's trust framework [21] holds up in the context of driving the Tesla. Our expectations were as follows:
Dispositional Trust: Individual differences in trust disposition should predict driving behaviors and Autopilot use.
Learned Trust: Trust in the Autopilot will increase from the beginning of the study to the end of the study.
Situational Trust: Drivers will trust the Autopilot more in routine parts of the drive compared to non-routine parts of the drive.
For these assessments, we examined data from the six subjects who used the Autopilot mode twice to see if Hoff & Bashir's concepts of dispositional trust, learned trust, and situational trust are borne out in the self-report and driving attribute data. We also compared driving data from the Manual and Autopilot modes to confirm that an Autopilot effect exists. Because of the small number of data sets, this analysis is preliminary, but the exercise is helpful for identifying potentially relevant vehicle attributes for future studies of trust in automation.

6.1 Dispositional Trust

To assess the relationship of dispositional trust to driving behavior, we computed correlation coefficients between a subject's average TOAST score and each of the 24 driving attributes identified in Table 3. TOAST asks subjects to indicate their level of agreement (using a 7-point Likert scale) with nine statements about trust in automated systems. Table 18 shows strong correlations between average TOAST scores and the number of downhill braking events (DBn); the length of time brakes were applied during downhill braking (DBl); the standard deviation of distance to the left line when driving downhill (DLs); and average driving speed in a straight line condition (SSa). In other words, the data suggested that subjects who trusted automated systems more tended to rely on the Autopilot more, because:
When driving downhill, the number of braking events was lower (DBn) and the time spent braking (DBl) was less.
When driving downhill, the variation in the vehicle's distance from the left line (DLs) was relatively low.
When driving straight, the vehicle's average speed (SSa) was relatively high.
Table 18.
Subject | Drive | TOAST | DBn | DBl | DLs | SSa
21191817 | Autopilot | 3 | 1.00 | 1.26 | 0.01 | 39.98
21164616 | Autopilot | 5.111111 | 0.00 | 0.00 | 0.01 | 45.00
21199974 | Autopilot | 5.555556 | 0.00 | 0.00 | 0.01 | 44.99
21131964 | Autopilot | 4.222222 | 1.00 | 1.24 | 0.01 | 45.00
21160564 | Autopilot | 4.666667 | 0.00 | 0.00 | 0.01 | 45.00
21191823 | Autopilot | 5.666667 | 0.00 | 0.00 | 0.01 | 45.00
Correlations |  |  | −0.85018 | −0.85394 | −0.86469 | 0.83801
Table 18. Correlations Between Average TOAST Score and Selected Vehicle Attributes
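Three of the four correlations can be checked directly from the printed values; a minimal pandas sketch (column names follow the paper's attribute codes) is shown below. Note that DLs is constant at the two decimal places shown in Table 18, so its reported correlation of −0.86 can only be recovered from the unrounded data and is omitted here.

```python
import pandas as pd

# Values transcribed from Table 18.
df = pd.DataFrame({
    "TOAST": [3.0, 5.111111, 5.555556, 4.222222, 4.666667, 5.666667],
    "DBn":   [1.00, 0.00, 0.00, 1.00, 0.00, 0.00],
    "DBl":   [1.26, 0.00, 0.00, 1.24, 0.00, 0.00],
    "SSa":   [39.98, 45.00, 44.99, 45.00, 45.00, 45.00],
})

# Pearson correlation of each attribute with the average TOAST score;
# DBn ~ -0.850, DBl ~ -0.854, SSa ~ 0.838, matching the table.
print(df.corr(method="pearson")["TOAST"].drop("TOAST").round(5))
```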

6.2 Learned Trust

To assess whether trust in the Autopilot increased over time, we compared the means of the Trust in Automation ratings and the 24 driving attributes from subjects’ first Autopilot driving attempts with those from their second Autopilot (Driver's Preference) attempts (Table 19). The data suggest that learned trust developed between the first and second drives, because:
The mean Trust in Automation rating increased.
There was increased reliance on Autopilot on the second drive.
When driving on a curve, the number of braking events (CBn) was lower and the time spent braking (CBl) was less (i.e., 0 seconds).
Driving speed increased when driving on a curve (CSa) and there was less variation in driving speed (CSs).
Even though speed increased, the average distance to the left line (CLa) and the variability in the distance (CLs) remained the same.
Table 19.
Subject | Drive | Trust | CBn | CBl | CSa | CSs | CLa | CLs
21191817 | Autopilot | 5.75 | 0.00 | 0.00 | 40.47 | 3.09 | −0.14 | 0.01
21164616 | Autopilot | 5.25 | 1.00 | 4.14 | 42.47 | 4.88 | −0.15 | 0.01
21199974 | Autopilot | 4.00 | 1.00 | 0.28 | 44.76 | 0.90 | −0.15 | 0.01
21131964 | Autopilot | 6.25 | 1.00 | 2.15 | 42.24 | 4.66 | −0.14 | 0.01
21160564 | Autopilot | 7.00 | 1.00 | 3.03 | 23.71 | 10.16 | −0.17 | 0.02
21191823 | Autopilot | 6.75 | 0.00 | 0.00 | 39.73 | 8.05 | −0.15 | 0.01
 | Average | 5.833333 | 0.67 | 1.60 | 38.90 | 5.29 | −0.15 | 0.01
21191817 | Autopilot (DP) | 5.50 | 0.00 | 0.00 | 42.27 | 3.72 | −0.15 | 0.02
21164616 | Autopilot (DP) | 6.75 | 0.00 | 0.00 | 40.15 | 2.45 | −0.17 | 0.02
21199974 | Autopilot (DP) | 4.25 | 0.00 | 0.00 | 44.99 | 0.13 | −0.15 | 0.01
21131964 | Autopilot (DP) | 6.25 | 0.00 | 0.00 | 43.49 | 2.19 | −0.15 | 0.01
21160564 | Autopilot (DP) | 7.00 | 0.00 | 0.00 | 44.99 | 0.13 | −0.15 | 0.01
21191823 | Autopilot (DP) | 7.00 | 0.00 | 0.00 | 44.99 | 0.12 | −0.14 | 0.02
 | Average | 6.125 | 0.00 | 0.00 | 43.48 | 1.46 | −0.15 | 0.01
Table 19. Comparisons of Trust and Driving on Curve Between 1st and 2nd Autopilot Attempts
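A paired comparison makes the learned-trust reading of Table 19 easy to check. The sketch below recomputes the two Trust means and runs a paired t-test with SciPy; with only six subjects the test is under-powered, which is one reason this analysis is treated as preliminary.

```python
import numpy as np
from scipy import stats

# Trust in Automation ratings from Table 19 (same six subjects, paired).
first = np.array([5.75, 5.25, 4.00, 6.25, 7.00, 6.75])   # 1st Autopilot attempt
second = np.array([5.50, 6.75, 4.25, 6.25, 7.00, 7.00])  # 2nd attempt (Driver's Preference)

print(first.mean(), second.mean())  # 5.833..., 6.125, matching the table

# Paired t-test of the increase in mean Trust rating.
result = stats.ttest_rel(second, first)
print(result.statistic, result.pvalue)
```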

6.3 Situational Trust

To assess if Trust in Automation varies depending on the driving situation, we compared vehicle attributes across various driving situations when driving in Autopilot mode. The data related to braking suggest that reliance on Autopilot decreases as driving situations increase in complexity (Table 20):
When driving in a straight line, the average number of braking events (SBn) and duration of braking (SBl) is zero.
The average number of braking events (DBn) and duration of braking (DBl) is higher when driving downhill than when driving in a straight line.
The average number of braking events (CBn) and duration of braking (CBl) is higher when driving on a curve than when driving downhill.
Table 20.
Subject | Drive | DBn | DBl | SBn | SBl | CBn | CBl
21191817 | Autopilot | 1.00 | 1.26 | 0.00 | 0.00 | 0.00 | 0.00
21164616 | Autopilot | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 4.14
21199974 | Autopilot | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.28
21131964 | Autopilot | 1.00 | 1.24 | 0.00 | 0.00 | 1.00 | 2.15
21160564 | Autopilot | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 3.03
21191823 | Autopilot | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | Average | 0.333333 | 0.416667 | 0.00 | 0.00 | 0.67 | 1.60
21191817 | Autopilot (DP) | 1.00 | 2.85 | 0.00 | 0.00 | 0.00 | 0.00
21164616 | Autopilot (DP) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
21199974 | Autopilot (DP) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
21131964 | Autopilot (DP) | 1.00 | 0.76 | 0.00 | 0.00 | 0.00 | 0.00
21160564 | Autopilot (DP) | 1.00 | 1.61 | 0.00 | 0.00 | 0.00 | 0.00
21191823 | Autopilot (DP) | 1.00 | 0.95 | 0.00 | 0.00 | 0.00 | 0.00
 | Average | 0.666667 | 1.028333 | 0.00 | 0.00 | 0.00 | 0.00
Table 20. Comparisons of Driving Attribute Values for Selected Driving Situations
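The situational-trust comparison is a simple aggregation over Table 20; the following sketch (first-attempt rows only, transcribed from the table) recovers the straight < downhill < curve ordering of average braking events and braking duration.

```python
import pandas as pd

# First-attempt Autopilot rows of Table 20: braking events (n) and braking
# duration (l) for downhill (DB), straight (SB), and curve (CB) situations.
rows = [
    (21191817, 1.00, 1.26, 0.00, 0.00, 0.00, 0.00),
    (21164616, 0.00, 0.00, 0.00, 0.00, 1.00, 4.14),
    (21199974, 0.00, 0.00, 0.00, 0.00, 1.00, 0.28),
    (21131964, 1.00, 1.24, 0.00, 0.00, 1.00, 2.15),
    (21160564, 0.00, 0.00, 0.00, 0.00, 1.00, 3.03),
    (21191823, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00),
]
df = pd.DataFrame(rows, columns=["subject", "DBn", "DBl", "SBn", "SBl", "CBn", "CBl"])

# Averages per situation reproduce Table 20:
# straight (0.00, 0.00) < downhill (0.33, 0.42) < curve (0.67, 1.60).
print(df[["SBn", "SBl", "DBn", "DBl", "CBn", "CBl"]].mean().round(2))
```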

6.4 Comparison of Autopilot and Manual Driving

To compare drivers’ performance in the Autopilot and Manual modes, average speed, speed variation, and average distance to the left line under different driving conditions (downhill, straight line, turns, curves) were examined (Table 21). The averages and variances were computed for the subject pool. The results suggest that the variation in average speed and in average distance to the left line (lane keeping) is smaller in Autopilot mode than in Manual mode; taking average distance to the left line (lane keeping) at one end and average speed variation in the curve situation at the other, the magnitude of the difference ranges from about 0.1 to 40 times. This suggests that the Autopilot is better than human drivers at lane keeping, but more data are needed to evaluate the statistical significance of the difference. F-tests of the subject pool's driving behavior variation under all driving conditions in Autopilot versus Manual mode indicated that the difference between Autopilot and Manual driving is significant; that is, overall variation is less in Autopilot mode than in Manual mode. For the F-test of means, f_mean was 0.92498 and the critical one-tail value F_0.05,11,11 was 0.35; for the F-test of variances, f_variance was 13.99 and the critical one-tail value F_0.05,11,11 was 2.82. Therefore, δ_mean,Autopilot < δ_mean,Manual and δ_variance,Autopilot < δ_variance,Manual, and the null hypothesis is rejected.
Table 21.
 | DSa | DSs | DLa | SSa | SSs | SLa | TSa | TSs | TLa | CSa | CSs | CLa
Autopilot | 43.91 | 1.39 | −0.1583891 | 39.981968 | 0.04 | −0.16810 | 30.18 | 12.86 | −0.17931 | 40.47 | 3.09 | −0.144417
Mean | 44.49 | 0.99 | −0.1600000 | 45.000000 | 0.04 | −0.18550 | 27.18205 | 17.41423 | −0.16528 | 38.58167 | 5.730337 | −0.150735
Variance | 0.41 | 0.07 | 0.0000025 | 0.000023 | 0.00 | 0.00098 | 4.654949 | 2.535411 | 0.001297 | 2.27131 | 2.55126 | 0.000073
Manual | 39.53 | 3.71 | −0.1743979 | 40.612056 | 1.65 | −0.18877 | 29.94 | 13.02 | −0.18558 | 40.35 | 3.92 | −0.152079
Mean | 43.80 | 1.55 | −0.1700000 | 47.210000 | 1.43 | −0.18322 | 27.95637 | 17.2744 | −0.18280 | 42.73781 | 3.624244 | −0.153506
Variance | 3.78 | 0.49 | 0.0001000 | 18.950000 | 0.53 | 0.00030 | 7.481362 | 4.346794 | 0.00021 | 6.865958 | 1.900749 | 0.000085
Table 21. Comparison of Autopilot and Manual Driving Modes for Selected Driving Attributes
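For reference, the one-tailed F-test described above can be reproduced with SciPy. The sketch below assumes the two samples being compared are the 12 attribute-column values per mode in Table 21 (hence the 11 and 11 degrees of freedom); the helper name f_test is ours, not the paper's.

```python
import numpy as np
from scipy import stats

def f_test(sample_a, sample_b, alpha=0.05):
    """One-tailed F-test: is the variance of sample_a larger than that of
    sample_b? Returns the F statistic, the critical value, and the verdict."""
    a, b = np.asarray(sample_a, float), np.asarray(sample_b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    crit = stats.f.ppf(1 - alpha, len(a) - 1, len(b) - 1)
    return f, crit, f > crit

# With 12 attribute columns per mode (Table 21), the degrees of freedom
# are (11, 11); the upper critical value matches the paper's 2.82.
print(round(stats.f.ppf(0.95, 11, 11), 2))  # 2.82
```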

7 Conclusion and Future Directions

Fully autonomous driving is on the horizon; vehicles with advanced driver assistance systems (ADAS) such as Tesla's Autopilot are already available to consumers. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. Needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. In this paper, we proposed an architecture for an adaptive autonomous driving advisor, developing two layers of multiple sensor fusion models to provide appropriate speech-based reminders that increase driving safety based on predicted driving status. We also performed a preliminary validation of Hoff & Bashir's trust framework using real-life vehicle data, with some interesting findings about relevant vehicle attributes. Results suggest that (1) human trust in automation can be quantified and predicted with 80% to 85% accuracy based on vehicle data; and (2) the developed driving assistance model can generate appropriate voice instructions for use by a driving assistance device with 90–95% accuracy. Future directions include (1) obtaining more subject data to improve model accuracy and to validate Hoff & Bashir's trust framework and/or develop a trust model for autonomous vehicle driving that incorporates dispositional, learned, and situational trust; (2) investigating and integrating telemetry sensors and developing a novel real-time machine learning model built on ANN topology to improve model accuracy and real-time voice response; and (3) prototyping and evaluating a portable plug-in driving assistance device that can provide personalized advice to drivers.

References

[1]
U.S. Department of Transportation National Highway Traffic Safety Administration, National Statistics, available at https://cdan.nhtsa.gov/tsftables/National%20Statistics.pdf. (last accessed on 7/22/21).
[2]
J. R. Treat. 1980. A study of precrash factors involved in traffic accidents. HSRI Research Review 10, 6 (1980), 35.
[3]
S. Singh. 2015. Critical reasons for crashes investigated in the national motor vehicle crash causation survey. (Traffic Safety Facts Crash•Stats. Report No. DOT HS 812 115). Washington, DC: National Highway Traffic Safety Administration. Available online at: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115. (last accessed, 7/22/21).
[4]
T. A. Dingus, F. Guo, S. Lee, J. F. Antin, M. Perez, M. Buchanan-King, and J. Hankey. 2016. Driver crash risk factors and prevalence evaluation using naturalistic driving data. Proceedings of the National Academy of Sciences 113, 10 (2016), 2636–2641.
[5]
U.S. Department of Transportation Federal Highway Administration, How do Weather Events Impact Roads? Available online at: https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm. (last accessed, 7/22/21).
[6]
M. Galvani. 2019. History and future of driver assistance. IEEE Instrumentation & Measurement Magazine 22, 1 (2019), 11–16.
[7]
National Highway Traffic Safety Administration (NHTSA), 2021. Automated Vehicles for Safety. Available online at: https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety. (last accessed on 7/18/21).
[8]
C. Badue, R. Guidolini, R. V. Carneiro, P. Azevedo, V. B. Cardoso, A. Forechi, L. Jesus, R. Berriel, T. M. Paixão, F. Mutz, and L. de Paula Veronese. 2020. Self-driving cars: A survey. Expert Systems with Applications, 113816.
[9]
D. Yi, J. Su, L. Hu, C. Liu, M. Quddus, M. Dianati, and W. H. Chen. 2019. Implicit personalization in driving assistance: State-of-the-art and open issues. IEEE Transactions on Intelligent Vehicles 5, 3 (2019), 397–413.
[10]
A. Swief and M. El-Habrouk. 2018. A survey of automotive driving assistance systems technologies. In 2018 International Conference on Artificial Intelligence and Data Processing (IDAP). IEEE. 1–12.
[11]
X. Li, K. Y. Lin, M. Meng, X. Li, L. Li, and Y. Hong. 2021. Composition and application of current advanced driving assistance system: A review. arXiv preprint arXiv:2105.12348.
[12]
A. Ziebinski, R. Cupek, D. Grzechca, and L. Chruszczyk. 2017. Review of advanced driver assistance systems (ADAS). In AIP Conference Proceedings. AIP Publishing LLC. 1906, 1 (2017), 120002.
[13]
E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda. 2020. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access 8 (2020), 58443–58469.
[14]
E. Marti, M. A. de Miguel, F. Garcia, and J. Perez. 2019. A review of sensor technologies for perception in automated driving. IEEE Intelligent Transportation Systems Magazine 11, 4 (2019), 94–108.
[15]
A. Moujahid, M. E. Tantaoui, M. D. Hina, A. Soukane, A. Ortalda, A. El Khadimi, and A. Ramdane-Cherif. 2018. Machine learning techniques in ADAS: A review. In 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE). IEEE. 235–242.
[16]
R. Parasuraman, and V. Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors 39, 2 (1997), 230–253.
[17]
R. Parasuraman, T. B. Sheridan, and C. D. Wickens. 2000. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 30, 3 (2000), 286–297.
[18]
L. Onnasch, C. D. Wickens, H. Li, and D. Manzey. 2014. Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors 56, 3 (2014), 476–488.
[19]
A. Madison, A. Arestides, S. Harold, T. Gurchiek, K. Chang, A. Ries, N. Tenhundfeld, E. Phillips, E. de Visser, and C. Tossell. 2021. The design and integration of a comprehensive measurement system to assess trust in automated driving. In 2021 Systems and Information Engineering Design Symposium (SIEDS). IEEE. 1–6.
[20]
J. D. Lee and K. A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1 (2004), 50–80.
[21]
K. A. Hoff and M. Bashir. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors 57, 3 (2015), 407–434.
[22]
S. C. Kohn, E. de Visser, E. Wiese, Y. C. Lee, and T. H. Shaw. 2021. Measurement of trust in automation: A narrative review & reference guide. Frontiers in Psychology, 12.
[23]
J. D. Lee, S. Y. Liu, J. Domeyer, and A. DinparastDjadid. 2021. Assessing drivers’ trust of automated vehicle driving styles with a two-part mixed model of intervention tendency and magnitude. Human Factors 63, 2 (2021), 197–209.
[24]
N. L. Tenhundfeld, E. J. de Visser, A. J. Ries, V. S. Finomore, and C. C. Tossell. 2020. Trust and distrust of automated parking in a Tesla Model X. Human Factors 62, 2 (2020), 194–210.
[25]
N. L. Tenhundfeld, E. J. de Visser, K. S. Haring, A. J. Ries, V. S. Finomore, and C. C. Tossell. 2019. Calibrating trust in automation through familiarity with the autoparking feature of a Tesla Model X. Journal of Cognitive Engineering and Decision Making 13, 4 (2019), 279–294.
[26]
K. E. Schaefer, J. Y. Chen, J. L. Szalma, and P. A. Hancock. 2016. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors 58, 3 (2016), 377–400.
[27]
E. J. de Visser, M. M. Peeters, M. F. Jung, S. Kohn, T. H. Shaw, R. Pak, and M. A. Neerincx. 2020. Towards a theory of longitudinal trust calibration in human–robot teams. International Journal of Social Robotics 12, 2 (2020), 459–478.
[28]
J. Lee and N. Moray. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 10 (1992), 1243–1270.
[29]
J. D. Lee and N. Moray. 1994. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies 40, 1 (1994), 153–184.
[30]
A. Freedy, E. de Visser, G. Weltman, and N. Coeyman. 2007. Measurement of trust in human-robot collaboration. In 2007 International Symposium on Collaborative Technologies and Systems. IEEE. 106–114.
[31]
E. de Visser and R. Parasuraman. 2011. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making 5, 2 (2011), 209–231.
[32]
E. J. de Visser, S. S. Monfort, R. McKendrick, M. A. Smith, P. E. McKnight, F. Krueger, and R. Parasuraman. 2016. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied 22, 3 (2016), 331.
[33]
P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. Chen, E. J. De Visser, and R. Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors 53, 5 (2011), 517–527.
[34]
H. Wojton, S. Lane, and D. Porter. 2020. Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology. 1–16.
[35]
R. C. Mayer, J. H. Davis, and F. D. Schoorman. 1995. An integrative model of organizational trust. Academy of Management Review 20, 3 (1995), 709–734.
[36]
B. C. Kok and H. Soh. 2020. Trust in robots: Challenges and opportunities. Current Robotics Reports. 1–13.
[37]
P. A. Hancock, T. T. Kessler, A. D. Kaplan, J. C. Brill, and J. L. Szalma. 2021. Evolving trust in robots: Specification through sequential and comparative meta-analyses. Human Factors 63, 7 (2021), 1196–1229.
[38]
D. Ullman and B. F. Malle. 2019. Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE. 618–619.
[39]
B. F. Malle and D. Ullman. 2021. A multidimensional conception and measure of human-robot trust. In Trust in Human-Robot Interaction. Academic Press. 3–25.
[40]
K. Akash, W. L. Hu, N. Jain, and T. Reid. 2018. A classification model for sensing human trust in machines using EEG and GSR. ACM Transactions on Interactive Intelligent Systems (TiiS) 8, 4 (2018), 1–20.
[41]
K. Akash, N. Jain, and T. Misu. 2020. Toward adaptive trust calibration for level 2 driving automation. In Proceedings of the 2020 International Conference on Multimodal Interaction. 538–547.
[42]
X. Fu, Y. Zang, and H. Liu. 2012. A real-time video-based eye tracking approach for driver attention study. Computing and Informatics 31 (2012), 805–825.
[43]
Y. Xing, C. Lv, H. Wang, D. Cao, and E. Velenis. 2020. An ensemble deep learning approach for driver lane change intention inference. Transportation Research Part C: Emerging Technologies 115, 102615.
[44]
N. Merat, B. Seppelt, T. Louw, J. Engström, J. D. Lee, E. Johansson, et al. 2019. The “out-of-the-loop” concept in automated driving: Proposed definition, measures and implications. Cognition, Technology & Work 21, 1 (2019), 87–98.
[45]
M. R. Endsley. 2018. Situation awareness in future autonomous vehicles: Beware of the unexpected. In Congress of the International Ergonomics Association. Springer, Cham. 303–309.
[46]
J. C. De Winter, R. Happee, M. H. Martens, and N. A. Stanton. 2014. Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation Research part F: Traffic Psychology and Behaviour 27 (2014), 196–217.
[47]
V. A. Banks, A. Eriksson, J. O'Donoghue, and N. A. Stanton. 2018. Is partially automated driving a bad idea? Observations from an on-road study. Applied Ergonomics 68 (2018), 138–145.
[48]
D. Ullman and B. F. Malle. 2019. MDMT: Multi-dimensional measure of trust. Available online at: https://research.clps.brown.edu/SocCogSci/Measures/MDMT_v1.pdf. (last accessed, 2/22/22).
[49]
U. R. Acharya, E. Y. K. Ng, J. H. Tan, and S. V. Sree. 2012. Thermography based breast cancer detection using texture features and support vector machine. Journal of Medical Systems 36, 3 (2012), 1503–1510.
[50]
A. Statnikov, L. Wang, and C. F. Aliferis. 2008. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics 9, 1 (2008), 1–10.
[51]
A. L. Boulesteix, S. Janitza, J. Kruppa, and I. R. König. 2012. Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2, 6 (2012), 493–507.
[52]
R. Caruana, N. Karampatziakis, and A. Yessenalina. 2008. An empirical evaluation of supervised learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning. 96–103.
[53]
S. J. Hsieh. 2004. Artificial neural networks and statistical modeling for electronic stress prediction using thermal profiling. IEEE Transactions on Electronics Packaging Manufacturing 27, 1 (2004), 49–58.
[54]
S. J. Hsieh, R. Crane, and S. Sathish. 2005. Understanding and predicting electronic vibration stress using ultrasound excitation, thermal profiling, and neural network modeling. Nondestructive Testing and Evaluation 20, 2 (2005), 89–102.
[55]
U. Thissen, R. van Brakel, A. P. de Weijer, W. J. Melssen, and L. M. C. Buydens. 2003. Using support vector machines for time series prediction. Chemometrics and Intelligent Laboratory Systems 69, 1–2 (2003), 35–49.
[56]
F. Chauchard, R. Cogdill, S. Roussel, J. M. Roger, and V. Bellon-Maurel. 2004. Application of LS-SVM to non-linear phenomena in NIR spectroscopy: Development of a robust and portable sensor for acidity prediction in grapes. Chemometrics and Intelligent Laboratory Systems 71, 2 (2004), 141–150.
[57]
C. J. Burges. 1998. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2, 2 (1998), 121–167.
[58]
V. N. Vapnik. 1999. An overview of statistical learning theory. IEEE Transactions on Neural Networks 10, 5 (1999), 988–999.
[59]
R. Burbidge and B. Buxton. 2001. An introduction to support vector machines for data mining. Keynote Papers, Young OR12, 3–15.
[60]
K. R. Al-Balushi and B. Samanta. 2002. Gear fault diagnosis using energy-based features of acoustic emission signals. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 216, 3 (2002), 249–263.
[61]
K. Z. Mao. 2004. Feature subset selection for support vector machines through discriminative function pruning analysis. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 34, 1 (2004), 60–67.
[62]
L. J. Cao and F. E. H. Tay. 2003. Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks 14, 6 (2003), 1506–1518.
[63]
M. A. Mohandes, T. O. Halawani, S. Rehman, and A. A. Hussain. 2004. Support vector machines for wind speed prediction. Renewable Energy 29, 6 (2004), 939–947.
[64]
C. L. Huang, M. C. Chen, and C. J. Wang. 2007. Credit scoring with a data mining approach based on support vector machines. Expert Systems with Applications 33, 4 (2007), 847–856.
[65]
A. M. Younus, A. Widodo, and B. S. Yang. 2010. Evaluation of thermography image data for machine fault diagnosis. Nondestructive Testing and Evaluation 25, 3 (2010), 231–247.
[66]
M. Dikmen and C. Burns. 2017. Trust in autonomous vehicles: The case of Tesla Autopilot and Summon. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC’17). 1093–1098.
[67]
A. Gross. 2016. 87 percent of drivers engage in unsafe behaviors while behind the wheel. AAA Newsroom, 2/5/2016. Available at: https://newsroom.aaa.com/2016/02/87-percent-of-drivers-engage-in-unsafe-behaviors-while-behind-the-wheel/. last accessed on 7/22/21.
[68]
R. Read. 2016. Nearly 20 percent of U.S. drivers are over 65: Are America's roads ready for them? The Washington Post, 2016. Available online at, https://www.washingtonpost.com/cars/nearly-20-percent-of-us-drivers-are-over-65-are-americas-roads-ready-for-them/2016/11/07/1b70f292-a51b-11e6-ba46-53db57f0e351_story.html. Last accessed: 7/22/21.
[69]
Caring.com. Seniors and Driving: A Guide. Available online at: https://www.caring.com/caregivers/senior-driving/. Last accessed: 7/22/21.
