Abstract
The acceptance, and hence the spread, of automated and connected driving (ACD) systems is largely determined by the degree of subjective certainty or uncertainty that users feel when interacting with automated vehicles. User acceptance is negatively influenced in particular by feelings of uncertainty during such interactions. The AutoAkzept project (whose full title translates to: Automation without uncertainty to increase the acceptance of automated and connected driving) develops user-focused automation solutions that place the vehicle occupants at the center of system development and thus reduce their uncertainty. Systems with user-focused automation use various sensors to detect uncertainty and its contributing factors (e.g. stress, kinetosis, and activity) in real time, integrate this information with context data and derive the current needs of the vehicle occupants. For this purpose, the project develops an integrated architecture for context-sensitive user modelling, the derivation of user needs and the adaptation of system functions (e.g. human-machine interaction, interior, driving styles). The architecture is implemented using machine learning methods to develop real-time algorithms that map situational contexts, user states and adaptation requirements. The overall objective of AutoAkzept is the development of promising adaptation strategies that improve the user experience based on the identified uncertainty-related needs. By reducing or preventing subjective uncertainties, the developments of the project thus ensure a positive, comfortable user experience and contribute to increasing the acceptance of ACD.
1 Introduction: Automation without Uncertainty
The innovations of automated and connected driving (ACD) address numerous societal challenges in a fundamentally new way. ACD-supported, demand-oriented mobility services are expected to substantially reduce CO2 emissions through more efficient use of traffic infrastructures and systems, while reducing the burden on roads and parking spaces in cities and increasing traffic safety. The takeover of transport-related activities by ACD systems promises a gain in comfort and usable time for their users. Last but not least, ACD promises an increase in mobility and freedom of travel for people who are unable to drive a vehicle. However, ACD can only keep these promises if the associated technologies and technical systems achieve a high degree of dissemination in the foreseeable future. A main prerequisite for this is the acceptance of users and those affected by future ACD systems [1]. This acceptance is largely determined by the degree of trust and subjective safety that users and those affected, such as pedestrians, cyclists and drivers of conventional vehicles, feel when interacting with automated vehicles [2,3,4]. The AutoAkzept project (whose full title translates to: Automation without uncertainty to increase the acceptance of automated and connected driving) is therefore working on foundations and solutions for automation without uncertainty, which serve to ensure a high level of acceptance of ACD and contribute to the success of this new technology. The project focuses on the users of future ACD systems.
The automation of driving is changing the role of humans. Fully automated vehicle functions will take over, for specific applications, all control and monitoring tasks performed by humans in conventional motor vehicles. However, a lack of control can lead to uncertainty [5] and a lack of trust [6] among users of fully automated vehicles. The promised benefits of relief and time for other activities will therefore only materialize if using these systems is not associated with subjective uncertainty and a lack of trust [7]. A lack of knowledge about these new systems can, for example, cause subjective uncertainty among users with regard to their use. Research shows that, depending on the speed and maneuvering of vehicles, users may experience uncertainty in understanding, predicting and evaluating the vehicle behavior or the traffic situation [8]. Last but not least, the performance of non-driving activities, e.g. working in a mobile office, can create uncertainty as to whether kinetosis will occur or whether the time remaining until the destination or a system boundary is reached is sufficient to complete the current task. The experience of such subjective uncertainties in dealing with ACD reduces the certainty and confidence of users and decreases their acceptance. Hence, direct experience with automated vehicles must support the formation of trust by minimizing the occurrence of subjective uncertainties. For this, it is important that central needs of the users are taken into account. Recent studies [9, 10] point to the relevance of considering the information needs of users and traffic interaction partners of automated vehicles. Meeting these needs lays the foundation for ensuring that users of automated vehicles will not experience uncertainty [11]. AutoAkzept therefore focuses on the needs of users of ACD vehicles and develops solutions to reduce subjective uncertainties on the basis of user-focused systems.
2 User-Focused Automation
Traditional approaches to the design of automated systems neglect basic human needs and create systems that appear, or actually are, intransparent from the perspective of their users. Due to this lack of transparency, people interacting with such a system cannot understand the reasons for the behavior of the automation and cannot predict its next actions. In addition, this design approach, which is disadvantageous for humans, often requires users to adapt to the machine's mode of communication when interacting with technical systems, an aspect that has been criticized by the German Ethics Commission on Automated and Connected Driving [12]. Systems designed in this way carry the risk that people experience subjective uncertainties when using them, with the corresponding negative consequences for their acceptance and intended use. In contrast, AutoAkzept follows an approach of user-focused automation. This approach places two basic human needs at the center of system design: the need to understand [13] and the need to be understood (e.g. [14, 15]). The need to understand, which is closely related to information needs (e.g. [9]), is crucial for successful, goal-oriented interaction with the environment and with any artifact or system. It forms the basis for the acquisition and application of knowledge that gives meaning to things and aspects of the world, and enables understanding and predictability. To address this need, the design of automated systems must ensure that technologies and technical systems not only do what they promise, but also what their users expect them to do. Automated systems that are to be used and accepted by people must behave in a predictable manner, and in such a way that people understand them without ever having used them before.
The implementation of this requirement ensures that systems are transparent to their users, so that they can easily deduce the functions and modes of operation of the system and understand how it works with the least amount of effort. The need to be understood, on the other hand, is essential to build a relationship and to feel comfortable, seen and respected. Satisfying this need lays the foundation for greater sympathy and trust, the reduction of negative influences (e.g. stress) and the experience of positive emotions. Automated vehicles should therefore know whether their users are uncertain, stressed or nervous and react accordingly. They must be able to recognize when it is appropriate to provide information and when it is not. To this end, these systems must focus on the human being and be able to take into account the diverse nature of human states, the resulting needs and the resulting intentions. Figure 1 schematically contrasts this user-focused approach to automation design with the conventional approach.
3 Architectural Approach: Components and Functionalities
AutoAkzept focuses on the user state uncertainty and associated states such as anxiety, discomfort or stress. For the modelling of these user states there are several possibilities. Essentially, a distinction can be made between a data-driven and a model-based approach. With the data-driven approach, a classifier (a trained classification algorithm resulting from machine learning) for the relevant user state (“uncertainty”) is created directly from the data of the multimodal sensory and further data sources (see Fig. 2). In that case, a mapping of individual characteristics in the form of a user description within the user model would not be necessary. With the model-based approach, by contrast, classification is carried out on hierarchically separate levels with the aim of modelling the relevant user state through discrete, individually classified features or factors. In this case, a user description is explicitly modelled in the form of characteristics of the user, such as arousal (“is aroused”). It must therefore be decided whether the modelling of the user state should be data-driven or model-based (hierarchical). A model-based, hierarchical approach to assessing the user state and deriving user needs has decisive advantages for the architecture of the situation model. The architecture thereby achieves:
Scenario Agnosticism:
The concepts and developed solutions for the architecture can be transferred to further scenarios in which uncertainties (or comparable user states) have to be detected. The assessment of the user state in the situation model can thus be based on the evidence of the user descriptions and does not have to be learned anew in each scenario as with a data-driven approach.
Sensor Agnosticism:
The user state derivation in the situation model, if based on user descriptions, is not dependent on individual sensors. For example, a user description such as the level of arousal could be derived in different ways (real-time heart rate monitoring from facial RGB color video versus electrocardiogram (ECG)-based heart rate recording) and thus be provided as a user description. Consequently, the uncertainty determination on a higher level does not have to be retrained from scratch on a data set with modified sensors, but continues to work on the basis of the user descriptions. With the model-based approach, only the specific user description, e.g. “high arousal”, has to be retrained.
User State Agnosticism:
User descriptions can potentially be indicators for several user states, so that the recognition of other relevant user states (e.g. frustration, confusion, etc.) could also be based on the same user descriptions, thus facilitating an extension to other user states. Likewise, if there is insufficient evidence for a particular user state, systemic interventions or adaptations can be selected on the basis of the user descriptions, because relevant knowledge is semantically available (this is particularly important since emotions are constructs based on several components, which can be covered by the user descriptions).
Traceability and Explainability:
User state assessment based on user descriptions allows system decisions to be traced and explained. In general, this is an important aspect for the acceptance of technical (e.g. artificial intelligence based) decision systems, because gaining understanding is an important human need and a prerequisite for gaining trust. In addition, traceability and explainability are also useful for evaluating the system with regard to ethical issues, certification, management decisions and sales purposes.
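The sensor agnosticism argument above can be made concrete with a minimal Python sketch. All names, thresholds and values below are invented for illustration: two different sensing paths yield the same semantic user description ("arousal"), so the higher-level state assessment consumes only the description and is untouched when the sensor is swapped.

```python
def arousal_from_rgb_video(frame_heart_rates):
    """Derive the 'arousal' description from camera-based heart-rate estimates."""
    mean_hr = sum(frame_heart_rates) / len(frame_heart_rates)
    return {"arousal": "high" if mean_hr > 90 else "low"}

def arousal_from_ecg(rr_intervals_ms):
    """Derive the same 'arousal' description from ECG R-R intervals (ms)."""
    mean_hr = 60000.0 / (sum(rr_intervals_ms) / len(rr_intervals_ms))
    return {"arousal": "high" if mean_hr > 90 else "low"}

def classify_user_state(descriptions):
    """Higher-level assessment: only sees semantic user descriptions."""
    return "uncertain" if descriptions.get("arousal") == "high" else "certain"

# Either sensor path feeds the identical interface:
print(classify_user_state(arousal_from_rgb_video([95, 92, 98])))  # uncertain
print(classify_user_state(arousal_from_ecg([800, 790, 810])))     # certain
```

With a purely data-driven alternative, swapping the ECG for the camera would require retraining the entire classifier; here only the affected description function changes.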
For the implementation of a user-focused approach to system design, the project AutoAkzept developed the concept of a hierarchical, model-based functional architecture for the context-sensitive assessment of user states, the derivation of current user needs and the selection of systemic interventions or adaptations. Figure 2 shows this architectural concept. Seven components can be distinguished: (1) multimodal sensory input & data sources, (2) the user model, (3) the context model, (4) the integrated situation model, (5) the user profile, (6) the recommender for strategy selection and (7) the strategy catalogue. The central tasks and functions of each component of the architecture are described below with reference to different use cases that are considered in the project.
3.1 Sensors
User-focused automation requires the consideration and integration of information about the user as well as the systemic and situational context. For this, a multitude of sensory and non-sensory sources provide multimodal data (see Fig. 3). Data about the user can, for example, stem from cameras recording the users’ faces and bodies, physiological sensors such as an ECG, or eye tracking devices, to name a few. For the situational context, LIDAR or RADAR sensors as well as cameras, among others, can provide valuable information. In addition, data from services regarding location information (e.g. global positioning system, GPS) or the current weather, as well as calendar entries or inputs and status of the infotainment system, may be integrated to enrich user and context modeling. The general idea is that the sensors provide the raw data, while the processing of the data to derive information is accomplished in the user and context model, respectively. However, what counts as “raw data” depends heavily on the modeling approach as well as on the hardware and software tools used to record the sensor data. For instance, some ECG manufacturers provide software that automatically extracts heart rate information, so that this processing step does not need to be shifted to the user model.
3.2 User Model
The purpose of the user model is to integrate the different user-related sensor data in order to derive higher-order information about users and make this available to the integrated situation model. The user model consists of two modules for information processing, user description and user activity (see Fig. 4). The user description (UD) module has the function of deriving meaningful information units, so-called primitives, from the sensor data. The primitives are to be regarded as the smallest meaningful units of user description and can be used to describe the current posture, movement, arousal or facial expression of the user. Using primitives instead of directly processing the raw values has the advantage of facilitating the interpretability, and thus the transparency, of the user state recognition (which takes place in the integrated situation model). The user descriptions are on the one hand passed directly to the integrated situation model for the purpose of user state estimation and on the other hand used further within the user model for the estimation of user activity (UA) in the second module. The determination of user activity is useful because, together with the context model, it can provide further useful information for the classification of the user state and the derivation of the user need within the integrated situation model. Consequently, the information about user activity is also passed on to the integrated situation model.
In order to realize the described functionalities in the user model, raw data from the sensors are fed into the user description module. Since the nature of the raw data depends on the sensor, a preprocessing step is initially required. In this step, certain parameters are extracted from selected data streams (e.g. body model points or facial muscle activities from video data), while other raw data streams can be used directly to determine the user primitives in the subsequent steps. For instance, postural primitives, such as the position of the left hand, can be determined from the position of the body model points and their distance to relevant objects (e.g. a keyboard for mobile office work or the steering wheel). For movement primitives the (joint) change of postural primitives over time may be relevant, while facial expression primitives may be derived from combinations of facial muscle activations (e.g. extracted from videos of the face, as in [16]). To determine arousal primitives, combined parameters from peripheral physiological data are used and their deviation from a baseline or variability over time is calculated. The primitives are then passed on to the situation model as input for the estimation of the user state, and to the user activity module within the user model. In this module the current activity of the user (e.g. mobile office work, driving manually, relaxing, or reading a book) is derived primarily on the basis of the posture and movement primitives. Like the user description primitives, activities are made available to the integrated situation model, in which the current user state is estimated. Both user primitives and user activities are mostly determined by machine learning models trained on large sets of training data. However, if sufficient amounts of data are unavailable, it may be suitable to define the algorithms for determining user primitives and activities based on expert knowledge.
Taken together, the output of the user model can be imagined as a list of primitives and activities together with probabilities for their current occurrence.
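As a minimal sketch of these two ideas, the following Python fragment derives one postural primitive from the distance between a body-model point and a relevant object, and represents the user model's output as primitives and activities with occurrence probabilities. All names, coordinates, thresholds and probabilities are invented for illustration.

```python
import math

def hand_on_keyboard(hand_xyz, keyboard_xyz, threshold_m=0.10):
    """Postural primitive: hand within 10 cm of the keyboard (hypothetical threshold)."""
    return math.dist(hand_xyz, keyboard_xyz) < threshold_m

# Hypothetical user model output: primitives and activities with probabilities
# of current occurrence.
user_model_output = {
    "primitives": {
        "left_hand_on_keyboard": 0.92,  # postural
        "leaning_forward": 0.31,        # movement
        "high_arousal": 0.78,           # physiological
        "brow_lowered": 0.66,           # facial expression
    },
    "activities": {
        "mobile_office_work": 0.85,
        "relaxing": 0.05,
        "reading_book": 0.10,
    },
}

print(hand_on_keyboard((0.32, 0.10, 0.55), (0.30, 0.12, 0.50)))  # True
# The integrated situation model can then threshold or weight these entries:
active = [p for p, prob in user_model_output["primitives"].items() if prob > 0.5]
print(active)
```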
3.3 Context Model
Driving always takes place in contexts, which are determined by many factors, and a plethora of context parameters would be necessary to describe every possible traffic situation. Therefore, it is necessary to reduce the number of parameters for a given traffic situation. The context model acts as a context-dependent data distributor in the AutoAkzept architecture (see Fig. 5).
It requests contextual information that is required by the integrated situation model from various data sources. This includes information about the vehicle state and behavior (speed, acceleration, etc.), the surrounding traffic (distances, velocities, etc.) and information about the general traffic situation (road type, traffic volume, routing, etc.). The context model consists of two components, which operate on different levels of abstraction. The macroscopic context model classifies the current situation into abstract categories using traffic and GPS data. The classification system used in [17] serves as a basis for the classification system used in the context model and has to be extended to include information about the traffic volume. Depending on these abstract categories, the microscopic context model requests the relevant parameters that are necessary for the integrated situation model to infer the user state of interest (Sect. 3.4). Besides a set of basic input parameters that are required in every context, such as the velocity of the ego-vehicle, the integrated situation model depends on further input parameters such as the above-mentioned parameters about surrounding traffic or routing information. Most of the parameters that the microscopic context model may request are available from the vehicle's sensors. This includes driving dynamics parameters in particular. Furthermore, cameras or LIDAR sensors can provide information about the surrounding traffic participants and environment. Car2X communication interfaces could also provide information about other vehicles or traffic infrastructure, such as traffic light cycles. Finally, application programming interfaces can be used to access information provided by web services, such as routing services or the user’s calendar. The requested data will then be sent to the integrated situation model for the user state assessment (see Sect. 3.4) and to the user profile (see Sect. 3.5) to be available for the modelling of user preferences.
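The two-level operation described above can be sketched as follows. The categories, parameter names and classification rules are invented stand-ins, not the classification system of [17]: the macroscopic model maps traffic and GPS data to an abstract category, and the microscopic model then requests only the parameters relevant for that category.

```python
# Basic parameters required in every context (hypothetical names).
BASIC_PARAMETERS = ["ego_velocity"]

# Category-specific parameters the microscopic model requests (hypothetical).
CATEGORY_PARAMETERS = {
    "urban_shared_space": ["pedestrian_distances", "cyclist_distances"],
    "highway_dense_traffic": ["time_headway", "lead_vehicle_velocity"],
    "rural_free_flow": ["route_remaining_time"],
}

def classify_macroscopic(road_type, traffic_volume):
    """Macroscopic context model: abstract category from traffic/GPS data."""
    if road_type == "urban" and traffic_volume == "mixed":
        return "urban_shared_space"
    if road_type == "highway" and traffic_volume == "dense":
        return "highway_dense_traffic"
    return "rural_free_flow"

def request_parameters(category):
    """Microscopic context model: parameters to fetch from vehicle sensors,
    Car2X interfaces or web services for the current category."""
    return BASIC_PARAMETERS + CATEGORY_PARAMETERS[category]

category = classify_macroscopic("urban", "mixed")
print(request_parameters(category))
# ['ego_velocity', 'pedestrian_distances', 'cyclist_distances']
```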
3.4 Integrated Situation Model
Because user states can often only be interpreted meaningfully within a certain context [20], the goal of the integrated situation model is to bring together input from the user model and the context model to derive 1) the user’s state (US), 2) its most likely cause (MLC) and 3) the resulting user need (N). In other words, the integrated situation model completes three successive tasks with three different outputs (see Fig. 6).
The first task (determining whether the user is uncertain based on the observable signs) can be solved by training a classifier that operates on user activities, user descriptions and context values to output the US (with the values uncertain/certain).
The second task (deriving the most likely cause of the uncertainty) can be solved with a Bayesian network. A Bayesian network is a directed acyclic graph whose nodes denote random variables and whose edges denote direct causal dependencies between the variables [21]. The network takes the US as input and outputs the MLC for this uncertainty. For each of the causes, the Bayesian network computes the post-intervention probability, which denotes how likely it is that the user is uncertain because of it (causally). For instance, the post-intervention probability P(uncertain | do(Confusing Scene = 0)) expresses the effect of an intervention strategy that makes the scene more transparent. The most likely among all possible causes can then be selected, e.g. as the cause whose intervention yields the largest reduction in the probability of uncertainty. The structure of the network can be learned from observational data or defined by expert knowledge. To train the network, data from test persons experiencing uncertainty in several situations are fed into the algorithms to estimate the conditional probabilities of each node conditioned on its parent nodes. After training, the network can evaluate unseen data to detect the most likely cause of previously detected uncertainty.
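The post-intervention reasoning can be sketched with a toy two-cause network evaluated by exhaustive enumeration. The priors, causal strengths and the noisy-OR conditional below are invented for illustration; a real implementation would estimate them from the study data described above.

```python
from itertools import product

PRIORS = {"confusing_scene": 0.3, "kinetosis_risk": 0.2}    # P(cause = 1), invented
STRENGTH = {"confusing_scene": 0.8, "kinetosis_risk": 0.6}  # causal strengths, invented

def p_uncertain_given(causes):
    """Noisy-OR conditional: each active cause independently triggers uncertainty."""
    p_not = 1.0
    for name, value in causes.items():
        if value:
            p_not *= 1.0 - STRENGTH[name]
    return 1.0 - p_not

def p_uncertain(do=None):
    """Marginal P(US = uncertain), optionally under the intervention do(cause = 0)."""
    free = [n for n in PRIORS if n != do]
    total = 0.0
    for values in product([0, 1], repeat=len(free)):
        causes = dict(zip(free, values))
        if do is not None:
            causes[do] = 0  # graph surgery: clamp the intervened cause to 0
        weight = 1.0
        for n, v in zip(free, values):
            weight *= PRIORS[n] if v else 1.0 - PRIORS[n]
        total += weight * p_uncertain_given(causes)
    return total

# The most likely cause is the one whose removal reduces uncertainty the most:
mlc = min(PRIORS, key=lambda cause: p_uncertain(do=cause))
print(round(p_uncertain(), 4), mlc)  # 0.3312 confusing_scene
```

In a larger network the same quantities would be computed by a proper inference engine rather than full enumeration; the argmax/argmin logic stays identical.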
The third task is to derive the user’s need explicitly, e.g. using a simple look-up table, specified by expert knowledge, that maps the most likely cause and its post-intervention probability to a more verbose description of the need. This explicit need is important for human judges of the integrated situation model to be able to evaluate and understand whether the situation model draws the right inferences. Other than that, it exists as an epiphenomenon that is not used further downstream in the architecture.
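Such an expert-specified look-up table could be as simple as the following sketch; the cause identifiers and need descriptions are invented placeholders.

```python
# Hypothetical expert-specified mapping from the most likely cause (MLC)
# to a verbose, human-readable user need.
NEED_LOOKUP = {
    "confusing_scene": "increase transparency of the automation's perception",
    "kinetosis_risk": "reduce motion cues and provide a stable visual anchor",
    "approaching_system_boundary": "inform about remaining time to the system boundary",
}

def derive_need(mlc):
    """Third task: map the MLC to an explicit need for human evaluation."""
    return NEED_LOOKUP.get(mlc, "no explicit need derivable")

print(derive_need("confusing_scene"))
```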
3.5 User Profile
To keep track of and account for individual user preferences, the AutoAkzept system creates an individual user profile for every vehicle user [18, 19]. The user profile tracks and saves individual user preferences with respect to the system’s behavior and settings. The recorded preferences comprise parameters related to the vehicle’s driving style, the HMI or routing. The user profile component consists of three sub-components: data storage, inference engine and graphical user interface (GUI) (see Fig. 7). The data storage component contains a priori user characteristics, such as age or experience with automated vehicles. Furthermore, it contains a history of all the drives a user has experienced. For each drive, the user description (UD), the user activity (UA), context information (C) and the currently used interventions or adaptation strategies (AS) (Sects. 3.2, 3.3 and 3.7) are saved in the history. Lastly, the data storage stores the current user preferences that can be provided to the recommender (Sect. 3.6). The inference engine uses the data about previous driving maneuvers to model the current user preferences and updates these models after each driving maneuver. After querying the data from the history, it models the user preferences as measures of central tendency of the probability distribution that the user was in a certain user state given UD, UA, C and AS (P(US | UA, UD, C, AS)). The current user preference with respect to a user state can, for example, be represented as the mean and variance of this probability distribution. The more data the inference engine has access to, the more precise the user preferences will be. The user profile inherently represents a feedback loop with respect to the system’s adaptation strategies, since the inference engine takes the user description and user activity into account. Let us assume that the system has detected the user’s uncertainty in a given maneuver and adapted its driving style afterwards.
In upcoming occurrences of this maneuver, the user’s uncertainty will be lower due to the newly applied driving style. The inference engine compares the mean uncertainty for the two driving styles and will conclude that the user prefers the adapted driving style, since the mean uncertainty during the maneuvers with this driving style was lower (E[P(US = “uncertain” | UA, UD, C, AS = “defensive”)] < E[P(US = “uncertain” | UA, UD, C, AS = “normal”)]). The last sub-component, the GUI, gives users the opportunity to change certain settings of the system manually. This allows the user to correct user preferences that may have been inferred incorrectly by the inference engine or to fine-tune the system’s settings. Among others, the parameters that may be changed by the user include preferences regarding routing or individual driving style parameters such as the car’s speed [18]. In the example above, the system chose to switch to a more defensive driving style to reduce the user’s uncertainty. The user’s uncertainty will most likely remain low for this driving style, but the user may prefer to switch to a less defensive driving style over time after gaining more confidence in the automated vehicle's abilities. The GUI allows him or her to change the driving style again.
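The preference comparison described in this example can be sketched as follows. The drive history and its uncertainty values are invented; each entry stands for one maneuver with the adaptation strategy (AS) used and the detected probability of the "uncertain" state.

```python
from statistics import mean

# Hypothetical drive history: (adaptation strategy, P(US = "uncertain") in that maneuver).
history = [
    ("normal", 0.72), ("normal", 0.65), ("normal", 0.70),
    ("defensive", 0.25), ("defensive", 0.30), ("defensive", 0.22),
]

def mean_uncertainty(style):
    """Empirical estimate of E[P(US = "uncertain" | AS = style)] from the history."""
    return mean(p for s, p in history if s == style)

# The inference engine prefers the style with the lower mean uncertainty:
preferred = min(("normal", "defensive"), key=mean_uncertainty)
print(preferred)  # defensive
```

As more drives accumulate in the history, these empirical means become more precise, which is the feedback loop described above.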
3.6 Recommender
The recommender system [22] is a trained machine learning algorithm (e.g. a random forest or a neural network) that decides which adaptation strategy S is most suited for the typical user in a given situation to reduce uncertainty, thus improving their overall user experience and increasing their acceptance.
The input to the recommender system consists of the user profile’s output P, i.e. what a specific user has preferred in this or similar situations in the past, the integrated situation model’s output MLC, i.e. the inference whether uncertainty is present and what its most likely cause is, and context signals C (see Fig. 8).
The mapping between input data and the most suitable adaptation (output) is initially specified based on the results of interviews with users who have just experienced short real-world rides in an automated vehicle, and of studies in a driving simulator. Importantly, in these studies no adaptations were offered; instead, users were asked about their experience and what would help them feel safer. Based on these results, experts initially label the input training data. In the next step, the algorithm trained on expert labels can be evaluated in a real-world user study in which participants not only experience automated driving but also the adaptations suggested by the recommender system. Given the evaluation data, the recommender system can be re-trained if necessary. This two-step approach allows the user perspective in real-world situations to be considered truly iteratively.
The adaptation chosen as most suitable to improve the user experience is then checked for plausibility and safety before it is transferred to the execution node, e.g. the car controller. This safety check is needed to prevent the execution of dangerous adaptations, e.g. reducing the driving speed while platooning. For instance, when it is detected that the user is uncertain whether or not he or she can finish a current task during mobile office work due to an upcoming automation boundary, a possible adaptation could be “choose longer route over highway” to allow the user to spend more time on the task.
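A minimal sketch of such a safety check is shown below. The rule set, adaptation names and fallback are invented for illustration; in a real system the check would consult the actual vehicle state and certified safety logic.

```python
# Hypothetical rules marking an adaptation as unsafe in a given context:
# (adaptation name, predicate over the context dictionary).
UNSAFE_RULES = [
    ("reduce_speed", lambda ctx: ctx.get("platooning", False)),
    ("choose_longer_route", lambda ctx: ctx.get("fuel_low", False)),
]

def safety_check(adaptation, context):
    """Forward the adaptation to the execution node only if no rule flags it;
    otherwise fall back to a purely informational HMI adaptation."""
    for name, is_unsafe in UNSAFE_RULES:
        if name == adaptation and is_unsafe(context):
            return "hmi_information_only"  # safe fallback
    return adaptation

print(safety_check("reduce_speed", {"platooning": True}))   # hmi_information_only
print(safety_check("reduce_speed", {"platooning": False}))  # reduce_speed
```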
3.7 Intervention Catalogue
Based on the output of the recommender, the most helpful intervention strategy to mitigate the user’s uncertainty in the current moment can be chosen from an intervention catalogue (see Fig. 2). In general, this catalogue contains three different kinds of intervention: adaptations of the HMI, the driving style, and the vehicle’s interior. Let us again consider the case where the user is working in the mobile office and urgently needs to finish some documents for a meeting at the destination. He or she becomes uncertain whether this will be possible, because the system boundary (e.g. the change from highway to rural road) of the level-4 automated vehicle is approaching. In this case, the system could adapt the vehicle's driving behavior by selecting a route that allows longer automated driving but still guarantees an arrival in time. In addition, an adaptation of the interior lighting to optimize the conditions for office work could help the user, for instance by increasing the amount of activating blue light in the spectrum and providing focus light for better concentration. The HMI could then inform the user about the selected interventions and the designated arrival time at the destination. The association of which strategy is helpful in which situation is learned by the recommender and is initially based on the results of user studies. With increasing usage of the system, the system may learn in the user profile which strategies are favored by a specific user or user group and adjust the selection accordingly. In principle, the intervention catalogue is open for newly developed strategies to be added. In this case, however, the recommender system needs to be re-trained in order to be able to choose the new strategy.
4 From Concept to Use Case: The Interplay of Components
The developments in AutoAkzept aim at the detection and reduction of subjective uncertainties of future users of ACD. The goal is to take into account the basic user needs, the need to understand and the need to be understood, which are essential for building the trust that should arise from the experience of using ACD systems. How do the described components of user-focused automation in AutoAkzept interact to take these needs into account and to reduce subjective uncertainties of users? For illustration purposes, a prototypical sequence for one scenario addressed by AutoAkzept will be described:
A user of an automated vehicle is uncertain during the journey whether the vehicle is capable of driving through certain traffic situations safely. The associated need to understand is not sufficiently satisfied. The user’s uncertainty is expressed on a physiological and behavioral level, e.g. in measures of arousal, gaze behavior or posture. The user-focused automation collects corresponding parameters via its sensor technology for the assessment of user states. This data is mapped to user description primitives and known activities, which then are integrated with macroscopic (e.g. location information, road type) and microscopic (e.g. vehicle speed, time-headway) context information, and the current user status and potential causes are determined probabilistically.
The automation identifies, for example, user uncertainty and, as a probable cause, the small distances to other road users such as pedestrians and cyclists in a shared space. From this information, which is represented in the situation model, together with information from existing user profiles, the specific need for improving the user’s state is derived. This need, for example an increase in the transparency of the automation, is passed on to the recommender. Taking into account the context information and the user profile, the recommender selects an intervention addressing this need, e.g. displaying the detected road users. This information helps to satisfy the user’s need to understand. At the same time, the user notices that this information was presented at the moment of his or her subjective uncertainty; thus the need to be understood is taken into account as well. The increase in transparency reduces the user’s uncertainty, as does the experience of the system’s adequate reaction to the user’s own uncertainty. As a result, the user can build trust in and acceptance of the system.
5 Conclusion and Future Work
The aim of the AutoAkzept project is the development of solutions for user-focused automated automotive systems that are oriented towards basic user needs. This approach is intended to reduce or prevent subjective uncertainties of users of automated and connected vehicles and thereby to ensure high user acceptance. For this purpose, the project is developing methods for assessing and representing user states and context information, as well as for deriving adequate systemic interventions or adaptation strategies. An essential component is the definition and specification of a functional architecture of user-focused automated systems from which need-based systemic adaptations or interventions can be derived.
Modern high-resolution and reliable sensor technology forms the basis of automated driving. The fusion of different sensory data streams no longer only allows the detection of individual events, objects or parameters, but also the interpretative mapping of the systemic context as a whole scene. It also enables the unobtrusive detection of physiological, emotional and cognitive states of drivers and passengers. However, in the design of automated systems, human beings are usually considered merely as agents acting in a goal-oriented manner with stable characteristics that persist across situations, whose action goals and intentions are derived, without regard to individual differences, primarily from a normative understanding of roles (e.g. the driver as supervisor). Instead, human beings must be viewed as self-changing systems (e.g. physiology and circadian rhythm, changing action motives, etc.), and above all as agents with changing states and basic needs that are influenced by situational conditions as well as by the cognitions and emotions these conditions trigger. These characteristics are insufficiently taken into account in the current design of human-machine interaction for automated (transport) systems. Taking them into account requires integrating systemic and contextual information with the current state of the human being, since user states can only be determined unambiguously within the systemic and situational context; adequate systemic adjustments, in turn, can only be derived on the basis of such unambiguity.
In contrast, the assessment of driver and user states in real time allows an objective selection of systemic adjustments or interventions. Moreover, it creates the basis for taking into account two basic user needs, the need to understand and the need to be understood. Making the selected interventions or adaptations dependent on the situation (context) and the user state ensures that the relevant parameters are optimally adjusted to the user’s needs, so that the adaptations actually satisfy the individual user’s need to understand to the right extent and in an appropriate way. In addition, timely, user-focused systemic adjustments ensure that the user’s need to be understood is satisfied as well.
In this paper we have described the concept of a functional architecture for a user-focused automated system such as a highly automated or autonomous car. The presented architectural approach considers a variety of data sources of different modalities that provide data on the user, the vehicle and the individual and systemic context. Several modules have been integrated into the architecture for the hierarchical processing, aggregation, integration and evaluation of data from these sources. Module-specific functions were described for acquiring user profiles and inferring user preferences, for drawing conclusions about user states and their potential causes, and for deducing context- and user-state-sensitive interventions or adaptations.
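One way to read the hierarchical processing described above is as a pipeline of modules that successively aggregate raw data into user description primitives, a situation model, a user-state estimate, a derived need, and finally an adaptation. The module names, features, and thresholds below are assumptions for the sketch only:

```python
# Illustrative sketch of the architecture as a module pipeline
# (all module names, features, and thresholds are assumed).

from typing import Any, Callable

def sense(raw: dict) -> dict:
    """Aggregate multimodal sensor data into user description primitives."""
    return {"arousal": raw["heart_rate"] > 90, "gaze_scanning": raw["fixations"] > 5}

def fuse_context(primitives: dict) -> dict:
    """Integrate primitives with macro/micro context into a situation model."""
    return {**primitives, "shared_space": True}  # context stubbed for the sketch

def infer_state(situation: dict) -> str:
    """Map the situation model to the most probable user state."""
    return "uncertain" if situation["arousal"] and situation["gaze_scanning"] else "calm"

def derive_need(state: str) -> str:
    """Derive the user need addressing the estimated state."""
    return "increase_transparency" if state == "uncertain" else "none"

def recommend(need: str) -> str:
    """Select the systemic adaptation addressing the derived need."""
    return "display_detected_road_users" if need == "increase_transparency" else "no_intervention"

PIPELINE: list[Callable[[Any], Any]] = [sense, fuse_context, infer_state, derive_need, recommend]

data: Any = {"heart_rate": 105, "fixations": 8}
for module in PIPELINE:
    data = module(data)
print(data)  # -> display_detected_road_users
```

The value of such a decomposition is that each module can be developed, tested and replaced independently, e.g. swapping a rule-based state inference for a learned one without touching the recommender.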
Within AutoAkzept, most of the described functions and modules are developed, tested and implemented for demonstration under realistic automotive conditions. In the project, however, only a few narrowly defined use cases can be considered, which address particular subjective uncertainties of users of automated vehicles. Therefore, future work has to show that the proposed functional architecture allows the scenario-open design of user-focused automation that is restricted neither to single user states nor to specific sensor systems. Two important aspects must be included:
On the one hand, future work has to examine whether the proposed functional structure of the architecture also allows the development of systems that not only focus on various relevant user states (e.g. in addition to uncertainty, frustration [16] or fear [23]), but can also clearly discriminate between them. Only those systems that can detect and differentiate between different relevant user states will be successful, because only they can satisfy the need to be understood.
On the other hand, future work will also have to develop solutions for user-focused systems with respect to use cases with more than a single user. AutoAkzept only considers scenarios with a one-to-one mapping of users and automated systems. But be it in the domain of motorized individual traffic or in the domain of future mobility services such as automated shuttles, there will be use cases with more than one user per automated system. To ensure acceptance of their users, automated systems should maintain a user-focused perspective under such conditions, too, taking into account each user’s need to understand and need to be understood. Hence, architectures for user-focused systems for automated and connected driving must also be designed for such scenarios.
References
Hoyer, R., et al.: Bericht zum Forschungsbedarf. Runder Tisch Automatisiertes Fahren - AG Forschung. Bundesministerium für Verkehr und digitale Infrastruktur, Berlin (2015)
Nordhoff, S., de Winter, J., Kyriakidis, M., van Arem, B., Happee, R.: Acceptance of driverless vehicles: results from a large cross-national questionnaire study. J. Adv. Transp. 2018, 2 (2018). Article ID 5382192
Carsten, O., Martens, M.H.: How can humans understand their automated cars? HMI principles, problems and solutions. Cogn. Technol. Work 21(1), 3–20 (2018). https://doi.org/10.1007/s10111-018-0484-0
Oliveira, L., Proctor, K., Burns, C.G., Birrell, S.: Driving style: how should an automated vehicle behave? Information 10(6), 219 (2019)
Ruijten, P.A., Terken, J.M.B., Chandramouli, S.N.: Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behavior. Multimodal Technol. Interact. 2(4), 62 (2018)
Lee, J.D., Kolodge, K.: Exploring trust in self-driving vehicles through text analysis. Hum. Factors J. Hum. Factors Ergon. Soc. 62, 260–277 (2019)
Walker, F., Verwey, W.B., Martens, M.: Gaze behaviour as a measure of trust in automated vehicles. In: Proceedings of the 6th Humanist Conference, The Hague, Netherlands, June 2018, pp. 13–14 (2018)
Beggiato, M., Hartwich, F., Krems, J.: Using smartbands, pupillometry and body motion to detect discomfort in automated driving. Front. Hum. Neurosci. 12, 338 (2018)
Beggiato, M., Hartwich, F., Schleinitz, K., Krems, J., Othersen, I., Petermann-Stock, I.: What would drivers like to know during automated driving? Information needs at different levels of automation. Paper presented at the 7th International Conference on Driver Assistance (Tagung Fahrerassistenz), Munich, Germany, (2015)
Schieben, A., Wilbrink, M., Kettwich, C., Madigan, R., Louw, T., Merat, N.: Designing the interaction of automated vehicles with other traffic participants: design considerations based on human needs and expectations. Cogn. Technol. Work 21(1), 69–85 (2018). https://doi.org/10.1007/s10111-018-0521-z
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., Nass, C.: Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. (IJIDeM) 9(4), 269–275 (2014). https://doi.org/10.1007/s12008-014-0227-2
Fabio, D., et al.: Bericht der Ethik-Kommission für automatisiertes und vernetztes Fahren. Bundesministerium für Verkehr und digitale Infrastruktur, Berlin (2017)
Maslow, A.H., et al.: Motivation and Personality. Harper and Row, New York (1970)
Lun, J., Kesebir, S., Oishi, S.: On feeling understood and feeling well: the role of interdependence. J. Res. Pers. 42(6), 1623–1628 (2008)
Morelli, S., Torre, B.J., Eisenberger, N.I.: The neural bases of feeling understood and not understood. Soc. Cogn. Affect. Neurosci. 9(12), 1890–1896 (2014)
Ihme, K., Unni, A., Zhang, M., Rieger, J.W., Jipp, M.: Recognizing frustration of drivers from video recordings of the face and measurements of functional near infrared spectroscopy brain activation. Front. Hum. Neurosci. 12, 327 (2018)
Fastenmeier, W., Gstalter, H.: Driving task analysis as a tool in traffic safety research and practice. Saf. Sci. 45(9), 952–979 (2007)
Trende, A., Gräfing, D., Weber, L.: Personalized user profiles for autonomous vehicles. In: Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, pp. 287–291 (2019)
Nagy, A., et al.: U.S. Patent No. 10,449,957. U.S. Patent and Trademark Office, Washington, DC (2019)
Aviezer, H., et al.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)
Barber, D.: Bayesian Reasoning and Machine Learning. Cambridge University Press, Cambridge (2012)
Ricci, F., Rokach, L., Shapira, B.: Introduction to recommender systems handbook. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P.B. (eds.) Recommender Systems Handbook, pp. 1–35. Springer, Boston (2011). https://doi.org/10.1007/978-0-387-85820-3_1
Zhang, M., Ihme, K., Drewitz, U.: Discriminating drivers’ emotions through the dimension of power: evidence from facial infrared thermography and peripheral physiological measurements. Transp. Res. Part F Traffic Psychol. Behav. 63, 135–143 (2018)
Acknowledgment
The authors gratefully acknowledge the financial funding of this work by the German Federal Ministry of Transport and Digital Infrastructure under the grants 16AVF2126A, 16AVF2126B, and 16AVF2126D.
© 2020 Springer Nature Switzerland AG
Cite this paper
Drewitz, U. et al. (2020). Towards User-Focused Vehicle Automation: The Architectural Approach of the AutoAkzept Project. In: Krömker, H. (ed.) HCI in Mobility, Transport, and Automotive Systems. Automated Driving and In-Vehicle Experience Design. HCII 2020. Lecture Notes in Computer Science, vol. 12212. Springer, Cham. https://doi.org/10.1007/978-3-030-50523-3_2
Print ISBN: 978-3-030-50522-6. Online ISBN: 978-3-030-50523-3.