
1 Background

The core of information visualization is the mapping from abstract data to visual structures in order to strengthen the interactive presentation of abstract data; as a result, visual design plays a very important part in the process. However, it is worth noting that the aim of information visualization does not lie in visualization itself; visualization is only a means to an end. The ultimate aim is to extract information on the basis of visualization so as to support decision making [1]. This does not mean designing boring interfaces purely for function or gorgeous pictures purely for aesthetics. In order to convey information correctly, designers should combine aesthetic form with function by analyzing huge amounts of complex information and achieving direct visual expression. Excellent works of information visualization are the product of imaginative design aesthetics and rigorous engineering science; they present lengthy data in a highly artistic way and thus achieve a balance between aesthetic form and function.

Designers try to introduce information visualization into on-board interface design. One reason is that the visual system is the most developed sensory system and one of the most common and important ways for humans to interact with electronic interfaces; users make decisions by understanding interface content through visual information [2]. Another reason is that, as information products, on-board equipment is now in common use in daily life. There are many differences between the design of on-board interfaces and that of cell phones and computers: because users spend less time staring at the screen, designers have to pursue interface design research under complex driving conditions.

For example, a head-up display (HUD, Fig. 1) projects instrument information onto the windshield at a distant focal plane, so that the driver can read dashboard information while keeping their eyes on the road ahead.

Fig. 1. HUD head-up display

By conducting basic research on information visualization, building user task models and running eye-tracking tests, designers explore the implications and presentation methods of on-board interface interaction so as to form a design system based on information visualization.

2 Current Research and Development Trend

2.1 Current Situation of On-Board Interaction Design in the Automobile Industry

In China, with the rapid development of the automobile industry and the expansion and improvement of its product range, on-board equipment has become very popular among consumers. Following this trend, some famous car manufacturers provide on-board displays as standard configuration for luxury cars, such as the Audi A6L and Q5 from FAW-Volkswagen, the GL8 from SGM and the E-Class from Beijing Benz. The application of on-board equipment in China is still at an initial stage and has broad market prospects [3].

The popularity of smart phones and personal computers has promoted the prosperous development of the internet industry. At the same time, a reform is in full swing for cars, the other end of the "mobile terminal". The integration of information technology has brought new life to the automobile industry, and intelligent on-board equipment has become a top priority for many manufacturers and even internet companies. Participants from various fields are exploring how to create more intelligent on-board systems. Apple, Google and Microsoft are all investing in this area; Apple and Google have launched CarPlay and Android Auto respectively. Traditional automobile manufacturers are also conducting relevant research, with systems such as Ford SYNC, MyLink from Chevrolet, MMI from Audi and Blue Link from Hyundai.

In China, many internet companies and car manufacturers are beginning to conduct related research. Baidu has advantages in technology; its driverless cars have already been tested on the 5th Ring Road in Beijing. The open platforms of Tencent's car union include the MyCar service, the Car Union APP and the Car Union ROM. Alibaba's strength lies in AliCloud: new car brands have integrated its YunOS for Car operating system with resources such as big data, Ali Communication, AutoNavi, AliCloud computing and Xiami Music. Besides, Letv has put forward its own on-board system LeUI, the mobile phone connection system Ecolink and the concept car FF ZERO1.

Research on interface design started much earlier abroad and tends to be more mature there, while in China the relevant research is still at an initial stage. Designers lack related design codes when designing interactive interfaces, so they rely heavily on experiential evaluation. However, just as the development of all things depends on their internal contradictions, the development of the HMI (human-machine interface) reflects the relationship and contradiction between human and machine. If the human-machine relationship has promoted the evolution and advancement of vehicles, then the reform of the means of labor and work prompted by vehicles will in turn affect the development of human needs. The evolution of the human-machine relationship from "rigidity" to "flexibility" is reflected in the transformation of the car interface from scattered parts to an organic combination of hardware and software, and from a simple operation space to a pleasant mobile space (Fig. 2) [4].

Fig. 2. Evolution of the relationship between human and machine

Human-machine interaction based on cross-platform and multidisciplinary thinking is the new research direction for on-board interface design. Much seemingly powerful on-board equipment is complex to operate and comprehend in actual use. Moreover, a poorly designed software interface makes operation difficult for the driver and thus poses potential threats to driving safety.

2.2 Development Trend of On-Board Interface Design

The launch of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications in 2009 was extremely significant. The conference divided HMI research into four categories (Fig. 3): devices and interfaces, automation and instrumentation, evaluation and benchmarking, and driver performance and behavior [5].

Fig. 3. Research category of HMI (From auto-ui.org)

Against the backdrop of the Internet, the era of intelligent cars has arrived. Car manufacturers are further extending their value chains to continuously innovate their business structures and offerings. Future cars will combine advanced on-board sensors and controllers with modern communication and information technology to realize information sharing between the car and X (people, other cars, roads and the back end), and will possess capabilities such as perception of complex environments, intelligent decision making and cooperative control. They will eventually become a new generation of cars operating on the driver's behalf.

For car companies, according to a report on car consumer behavior in 2004 (Fig. 4), 80% of those interviewed are willing to pay for smart configurations.

Fig. 4. Report on car consumer behavior

In terms of birth year, the "post-85s" are more willing than people born before 1985 to increase their budgets for smart configurations. In terms of performance and convenience, the interviewees said that they long most for an "intelligent navigation system" and an "automatic parking system", which are favored by 30% of consumers. Consumers who hope for "driverless cars" accounted for 20%.

The design of on-board interfaces not only inherits elements of interaction design and information visualization but also adds interactions specific to the vehicle. It involves not only information visualization design and interaction design but also ergonomic design, context awareness and task model design. Like traditional interfaces it can present text and static pictures, but it can also present motion graphics and videos, which gives designers new design topics as well as a new design space. As on-board interface design matures, it can improve the usability of the interface, making operation easier and more efficient, and it has instructive significance for the design and research of software interfaces.

3 Research Methods and Contents

Based on the theory of information visualization, the research content is the study of on-board interface design. By conducting user research, designers study users' behavioral, physical and psychological features and analyze the methods and forms of visualized design inside the car so as to build an on-board interface system based on information visualization (Fig. 5).

Fig. 5. Model for on-board interface design

3.1 The Concept of Information Visualization

The concept of information visualization is developing rapidly. At first, information was only pieces of coarse raw data. The appearance of charts was the beginning of graphics, and their aim was to make abstract data easier to understand. With the advancement of technology and the internet, different kinds of graphics began to appear that help people analyze and identify potential problems. In particular, the rise of real-time, dynamic and interactive visualization has greatly improved people's ability to analyze and process information.

Through interaction, people can screen and filter information independently and adopt appropriate ways to search for information in order to find the hidden patterns needed to solve problems.

3.2 The Concept of Visual Attention

The fundamental function of attention is selection; the core of attention is the selective analysis of information. By allocating limited psychological resources, visual attention identifies important information in a limited time. The processing modes of visual sensation and visual perception are parallel and serial, respectively [6], and they differ in the amount of information they can process. Visual sensation can handle much more information than visual perception and is connected to it through the visual attention mechanism. Visual attention also stands at the front of the whole visual perception process and is a reliable guarantee for the recognition process. In Fig. 6, the green right angle in (A), the big red circle in (B) and the two double-sided arrows in (C) catch our eyes very easily. That is to say, visual attention plays a significant role in the process of visual recognition.

Fig. 6. Example of visual attention (Color figure online)

3.3 Visual Attention Model

The attention mechanism exerts great influence on visual perception, but research on its working mechanism still needs to go deeper. Based on research into the visual attention mechanism, researchers from various fields have put forward a range of theories and hypotheses. The filter model was proposed by Broadbent (Fig. 7). Although many channels carry stimuli, information can enter the advanced analytic processing stage through only a single channel, which reflects the role of the filter in attention selection.

Fig. 7. Filter model

Deutsch proposed the response selection model (Fig. 8). According to this model, visual stimuli can reach the advanced analysis level through many channels and all be perceived, while the attention mechanism acts on the response to the stimulation rather than on the visual stimulation itself.

Fig. 8. The response selection model

The process of analyzing the visual attention mechanism can be presented as transforming complex visual stimulation into several simple visual recognition tasks. These simple processes are mainly reflected in two aspects: information recognition and information positioning.

In fact, the first step of the whole process of visual information positioning is to match subconscious memory in order to select targets that fit the task. As shown in Fig. 9, the whole process is not an overall understanding of the object but a search based on basic features such as orientation and color. By matching these characteristics with memory and obtaining feedback, the target information is passed on for further analysis as key points and used to locate areas of interest, offering information support for segmenting pictures, identifying targets and analyzing situations in the following stage.

Fig. 9. The process of visual attention positioning

3.4 Forming On-Board Interface Task Models Based on Information Visualization

Information visualization is a series of controllable processes that transform raw data into visible forms and then into human perception. A static picture cannot support the dynamic process of data analysis; designers have to interact with the graphic elements within the visible interface to achieve the analysis goal according to the needs of users. The models established during this process actually denote the objectives of visual analysis. As a result, task model theory is a significant basis that supports and helps users understand the process and guides the design of the visual analysis system.

Firstly, conduct research from the perspective of high-level user goals, focusing on the user's intentions. Secondly, start from user activities, focusing on user behaviors. Thirdly, start from the system itself, focusing on the system's functional structure. Finally, integrate multiple tasks based on users' operation behaviors (Fig. 10). This proceeds as follows:

Fig. 10. Users’ operation behaviors

To a certain extent, the user's goal determines the whole framework of the on-board system, and the framework in turn also influences the user's goal. At present, on-board systems on the market mainly offer four functions: electronic navigation, entertainment, vehicle control and mobile connection (Fig. 11).

Fig. 11. Model framework

User activities include gesture interaction, speech recognition, body sensing and eye tracking. Among them, gesture interaction is the most universal and the easiest to achieve technically. Applying interactive gestures such as single or double tapping, multi-touch, pinching and swiping to the on-board system can better integrate it with the whole framework and connect multiple tasks (Fig. 12).

Fig. 12. Multi-task model

Embedded interaction in products links classical interactive functions with product functions, which means that interaction can be achieved through both physical and digital means. It can bring the two sides into harmony and form a design trend toward the integration of engineering design and interaction design.

Embedded and distributed modes of interaction and the continual appearance of new technology have greatly promoted automotive interaction design, for example Buick's concept car Riviera in 2013 (Fig. 13) and Benz's concept car FCV (Fig. 14) [7]. The combination of overall interior design with the human-machine interface reflects the trend toward integrating engineering design and interaction design.

Fig. 13. Buick’s concept car Riviera

Fig. 14. Benz’s concept car FCV

3.5 Eye Tracking Test Under Different Circumstances

The test results are used to describe and study user behaviors so as to evaluate the usability of the tested interface with the variables obtained from the test. From the perspectives of fixation time, fixation hot spots, sight paths and task response, designers prepare the test materials and finally collect and analyze the experimental data (Fig. 15).

Fig. 15. Eye tracking test

3.6 Comparing and Analyzing the Experimental Data of the Eye Tracking Test, Identifying Factors Influencing the Driver During Use and Proposing an Appropriate Design System for the On-Board Interface

From the analysis of hot spots, sight paths, clusters and the number of attention areas, designers understand the factors influencing the driver's performance and, by linking task models, finally form an on-board interface design system based on information visualization. It can achieve visualization of function, structure, performance and controls and eventually reach the goals of usability and good user experience: it should not only realize the fundamental functions but also give users a comfortable and harmonious operating experience.

With further theoretical research and the rise of precision devices, the application of eye-tracking technology in interface design is becoming more mature day by day. It involves the following three groups of measurement indexes:

  1. ①

    Search process

    • Scanning path. The scanning path is the spatial distribution of fixation points and saccades on the interface; the path length is the distance between two consecutive fixation points.

    • Number of saccades. As a count of visual search behaviors, the number of saccades can reveal the degree of organization of the screen.

    • Saccade amplitude. Saccade amplitude refers to the spatial distance between fixation points.

  2. ②

    Processing process

    • Number of fixation points. The total number of fixation points reflects the user's search efficiency; an unreasonable distribution of interface elements increases the number of fixation points.

    • Fixation rate in areas of interest. The fixation rate in an area of interest is the ratio of the fixation time spent on that area to the total fixation time.

    • Average fixation time. Average fixation time reflects the difficulty of acquiring information: a long fixation time means that it is rather difficult to acquire information from the interface, that is, there are unreasonable factors in the design of that interface zone.

    • Number of fixations in the area of interest. This is the total number of fixations in preset areas or on particular elements. It can be used to check the visibility of interface elements and their apparent meaning.

  3. ③

    Other indexes

    Apart from the measurement indexes mentioned above, there are other indexes such as retrospective saccades, hit rates and the number of fixations after finding the target. All these indexes allow the usability of the software interface to be evaluated in a more profound way; a small computation sketch for some of them follows.
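
A rough illustration of how a few of these indexes can be computed is sketched below in Python. The fixation data format, field names and the example area of interest are illustrative assumptions, not the apparatus or data of this study.

```python
# Minimal sketch (assumed data format): fixations as (x, y, duration) records.
from dataclasses import dataclass
from math import hypot

@dataclass
class Fixation:
    x: float            # screen coordinates in pixels (assumed)
    y: float
    duration_ms: float  # fixation duration in milliseconds

def scan_path_length(fixations):
    """Scanning path: sum of distances between consecutive fixation points."""
    return sum(hypot(b.x - a.x, b.y - a.y) for a, b in zip(fixations, fixations[1:]))

def number_of_saccades(fixations):
    """One saccade between each pair of consecutive fixations."""
    return max(len(fixations) - 1, 0)

def average_fixation_time(fixations):
    """Longer average fixation suggests information is harder to acquire."""
    return sum(f.duration_ms for f in fixations) / len(fixations)

def aoi_fixation_rate(fixations, aoi):
    """Share of total fixation time spent inside an area of interest (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = sum(f.duration_ms for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1)
    return inside / sum(f.duration_ms for f in fixations)

# Example with made-up fixations and a hypothetical navigation-widget AOI.
gaze = [Fixation(100, 80, 220), Fixation(340, 90, 310), Fixation(360, 400, 180)]
print(scan_path_length(gaze), number_of_saccades(gaze),
      average_fixation_time(gaze), aoi_fixation_rate(gaze, (300, 0, 500, 200)))
```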

4 Data Analysis Based on Multi-task Multi-view Learning

Multi-task multi-view learning (MTMV) is a hot research topic in machine learning which integrates multi-task learning (MTL) and multi-view learning (MVL) [8]. As shown in Fig. 16, many real-world problems contain more than one kind of information, known as multi-view data. Each view reflects one part of the problem's characteristics and provides one perspective from which to understand the problem. Compared with learning from a single view, learning from multiple views can make better use of these different views and yields improved results. Besides that, many real-world problems are similar or related to each other; for these problems, learning them jointly with multi-task learning strategies can usually improve the performance of each task compared to learning them separately. Therefore, as the combination of MTL and MVL, MTMV learning usually has better performance.

Fig. 16. Graphical representation of multi-view multi-task learning framework

The typical procedure for MTMV learning is first to construct a model that reflects the relationships among the different views and tasks; the objective function representing this model is then converted into a convex one, and its parameters are optimized by alternating iteration, that is, each parameter is optimized in turn while the others are held fixed. After that, predictions can be made with the trained model; a generic sketch of this alternating scheme is given below.
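
The following Python sketch shows the alternating (block-coordinate) iteration idea in its simplest form; the gradient functions, step size and toy objective are placeholders, not the optimization used for the actual MTMV objective.

```python
import numpy as np

def alternating_minimization(grad_fns, params, n_outer=200, lr=0.05):
    """Each parameter block is updated from its own gradient while the other
    blocks stay fixed, and the sweep over blocks is repeated."""
    for _ in range(n_outer):
        for name, grad in grad_fns.items():
            params[name] = params[name] - lr * grad(params)
    return params

# Toy usage: minimize f(a, b) = (a - 1)^2 + (b + 2)^2 + a*b by alternating over a and b.
grads = {
    "a": lambda p: 2 * (p["a"] - 1) + p["b"],
    "b": lambda p: 2 * (p["b"] + 2) + p["a"],
}
print(alternating_minimization(grads, {"a": np.array(0.0), "b": np.array(0.0)}))
```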

Generally speaking, the MTMV model includes four parts: a loss function measuring misclassification, a sparsity regularization, and two regularizations on the task-task and view-view relationships, respectively.

In order to formulate the MTMV model, we denote matrices by bold uppercase letters (e.g. X), vectors by bold lowercase letters (e.g. x) and scalars by italic letters (e.g. x). As illustrated in Fig. 16, suppose we have \( T \) related tasks \( (Task_{1} ,Task_{2} , \cdots ,Task_{T} ) \) in total. Each instance in \( Task_{t} \) (\( t \in [1,T] \)) is described from \( V \) views \( (View_{t1} ,View_{t2} , \cdots ,View_{tV} ) \). For the \( v \)-th view we collect \( M_{v} \) features, and let \( M = \mathop \sum \limits_{v = 1}^{V} M_{v} \). For \( Task_{t} \), \( N_{t} \) denotes the number of instances it contains \( (Instance_{1} ,Instance_{2} , \cdots ,Instance_{{N_{t} }} ) \). Specifically, for \( View_{v} \) in \( Task_{t} \), the feature matrix is \( {\mathbf{X}}_{t}^{v} \in {\mathbb{R}}^{{N_{t} \times M_{v} }} \), and \( {\mathbf{w}}_{t}^{v} \in {\mathbb{R}}^{{M_{v} }} \) parameterizes the linear mapping function. For convenience, we also denote \( {\mathbf{X}}_{t} = ({\mathbf{X}}_{t}^{1} ,{\mathbf{X}}_{t}^{2} , \cdots ,{\mathbf{X}}_{t}^{V} ) \in {\mathbb{R}}^{{N_{t} \times M}} \) and \( {\mathbf{w}}_{t} = ({\mathbf{w}}_{t}^{1} ,{\mathbf{w}}_{t}^{2} , \cdots ,{\mathbf{w}}_{t}^{V} ) \in {\mathbb{R}}^{M \times 1} \) as the concatenated feature matrix and parameter vector for \( Task_{t} \), respectively. Additionally, for the matrix \( {\mathbf{X}}_{t} \), its i-th row and j-th column are denoted by \( ({\mathbf{x}}_{t} )^{i} \) and \( ({\mathbf{x}}_{t} )_{j} \), respectively. That is, from the point of view of instances, \( {\mathbf{X}}_{t} \) can be understood as \( {\mathbf{X}}_{t} = (({\mathbf{x}}_{t} )^{1} ,({\mathbf{x}}_{t} )^{2} , \cdots ,({\mathbf{x}}_{t} )^{{N_{t} }} )^{T} \), where \( ({\mathbf{x}}_{t} )^{i} \in {\mathbb{R}}^{1 \times M} \) is the feature vector of \( Instance_{i} \) in \( Task_{t} \), given by the i-th row of \( {\mathbf{X}}_{t} \). Conversely, from the point of view of features, \( {\mathbf{X}}_{t} = (({\mathbf{x}}_{t} )_{1} ,({\mathbf{x}}_{t} )_{2} , \cdots ,({\mathbf{x}}_{t} )_{M} ) \), where \( ({\mathbf{x}}_{t} )_{j} \in {\mathbb{R}}^{{N_{t} \times 1}} \) is the j-th feature vector in \( Task_{t} \), given by the j-th column of \( {\mathbf{X}}_{t} \). In terms of the parameter matrix, for convenience let \( {\mathbf{W}} = [{\mathbf{w}}_{1} ,{\mathbf{w}}_{2} , \cdots {\mathbf{w}}_{T} ] \in {\mathbb{R}}^{M \times T} \). Each column of W is the coefficient vector for classifying the subjects of a specific task, \( {\mathbf{W}} = (({\mathbf{w}})_{1} ,({\mathbf{w}})_{2} , \cdots ,({\mathbf{w}})_{T} ) \), and obviously \( ({\mathbf{w}})_{t} = {\mathbf{w}}_{t} \). Within one row, the coefficients record the contributions of the same feature to the different tasks, \( {\mathbf{W}} = (({\mathbf{w}})^{1} ,({\mathbf{w}})^{2} , \cdots ,({\mathbf{w}})^{M} ) \). Let \( {\mathbf{y}}_{t} = [y_{t,1} ,y_{t,2} , \cdots y_{{t,N_{t} }} ] \in \{ 1, - 1\}^{{N_{t} \times 1}} \) be the vector of training labels in \( Task_{t} \).
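
To make the notation concrete, the short Python sketch below builds synthetic data with the shapes defined above; the numbers of tasks, views, features and instances are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, V = 3, 2                      # number of tasks and views (illustrative)
M_v = [4, 6]                     # features per view; M = sum(M_v)
M = sum(M_v)
N_t = [50, 40, 60]               # instances per task (illustrative)

# X[t] is the concatenated feature matrix X_t of shape (N_t, M); its column
# blocks correspond to the per-view matrices X_t^v.
X = [np.hstack([rng.normal(size=(N_t[t], m)) for m in M_v]) for t in range(T)]
y = [rng.choice([-1, 1], size=N_t[t]) for t in range(T)]   # labels y_t in {1, -1}

# W stacks the task parameter vectors w_t as columns: shape (M, T).
W = np.zeros((M, T))

def view_block(W, t, v):
    """Return the slice of column w_t that corresponds to view v, i.e. w_t^v."""
    start = sum(M_v[:v])
    return W[start:start + M_v[v], t]
```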

(1) Loss Function

Minimizing the difference between the known labels \( {\mathbf{y}}_{t} \) and the prediction results means minimizing the misclassification on the labeled examples:

$$ L = \sum\nolimits_{t = 1}^{T} {\left\| {{\mathbf{y}}_{t} - {\mathbf{X}}_{t} {\mathbf{w}}_{t} } \right\|}_{2}^{2} $$
(1)
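
Continuing the synthetic-data sketch above, the loss term of Eq. (1) can be evaluated as follows (a sketch, assuming the X, y and W defined earlier):

```python
import numpy as np

def loss(W, X, y):
    """Eq. (1): sum over tasks of || y_t - X_t w_t ||_2^2."""
    return sum(np.sum((y[t] - X[t] @ W[:, t]) ** 2) for t in range(len(X)))
```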

(2) Sparse Regulation

The adoption of a sparsity regularization generally has two purposes: to simplify the parameter matrix and to select common features across the different tasks. Therefore, we use the \( l_{2,1} \)-norm to formulate this regularization:

$$ R_{sparse} = \left\| {\mathbf{W}} \right\|_{2,1} $$
(2)

It has two advantages: firstly, it ensures that only a small number of features are jointly selected for all tasks; secondly, the coefficients are encouraged to be similar across different tasks, which supports joint feature selection.
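
As a sketch, the \( l_{2,1} \)-norm of Eq. (2) can be computed row-wise as follows:

```python
import numpy as np

def l21_norm(W):
    """Eq. (2): sum of the l2 norms of the rows of W, which drives entire
    feature rows toward zero and thus selects features shared across tasks."""
    return float(np.sum(np.linalg.norm(W, axis=1)))
```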

(3) Task-Task Regularization

When modeling the task-task relationship, we mainly rely on the fact that if the features of \( Task_{i} \) and \( Task_{j} \) are closely related, their corresponding coefficient vectors \( {\mathbf{w}}_{i} \) and \( {\mathbf{w}}_{j} \) should also be similar. To formulate this, we first measure the similarity between \( Task_{i} \) and \( Task_{j} \) as follows:

$$ g_{i,j} = exp( - 2\left\| {{\bar{\mathbf{x}}}_{i} - {\bar{\mathbf{x}}}_{j} } \right\|_{2}^{2} /\sigma^{2} ) $$
(3)

where \( {\bar{\mathbf{x}}}_{i} = \frac{1}{{N_{i} }}\mathop \sum \limits_{ins = 1}^{{N_{i} }} ({\mathbf{x}}_{i} )^{ins} \) and \( {\bar{\mathbf{x}}}_{j} = \frac{1}{{N_{j} }}\mathop \sum \limits_{ins = 1}^{{N_{j} }} ({\mathbf{x}}_{j} )^{ins} \) are the mean vectors of \( {\mathbf{X}}_{i} \) and \( {\mathbf{X}}_{j} \), and \( \sigma^{2} = \mathop \sum \limits_{i = 1}^{T} \mathop \sum \limits_{j = 1}^{T} \left\| {{\bar{\mathbf{x}}}_{i} - {\bar{\mathbf{x}}}_{j} } \right\|_{2}^{2} /T^{2} \). The larger \( g_{i,j} \) is, the more similar the two tasks are. Therefore, by minimizing the product of \( g_{i,j} \) and the difference between \( {\mathbf{w}}_{i} \) and \( {\mathbf{w}}_{j} \), we encourage \( {\mathbf{w}}_{i} \) and \( {\mathbf{w}}_{j} \) to converge when \( Task_{i} \) and \( Task_{j} \) are closely related. Consequently, the relationship between tasks can be formulated as:

$$ R_{task - task} = \sum\nolimits_{i \ne j}^{T} {g_{i,j} \left\| {{\mathbf{w}}_{i} - {\mathbf{w}}_{j} } \right\|}_{2}^{2} $$
(4)
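
A sketch of Eqs. (3) and (4), again assuming the synthetic X and W defined earlier:

```python
import numpy as np

def task_similarity(X):
    """Eq. (3): Gaussian similarity g_{i,j} between the mean feature vectors of the tasks."""
    means = [Xt.mean(axis=0) for Xt in X]
    T = len(X)
    sq = np.array([[np.sum((means[i] - means[j]) ** 2) for j in range(T)]
                   for i in range(T)])
    sigma2 = sq.sum() / T ** 2
    return np.exp(-2.0 * sq / sigma2)

def r_task_task(W, G):
    """Eq. (4): sum over i != j of g_{i,j} * || w_i - w_j ||_2^2."""
    T = W.shape[1]
    return sum(G[i, j] * np.sum((W[:, i] - W[:, j]) ** 2)
               for i in range(T) for j in range(T) if i != j)
```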

(4) View-View Regularization

Although we observe an instance from V different views, it is reasonable to presume that the discriminant functions for the different views should tend to yield the same label:

$$ R_{view - view} = \sum\nolimits_{t = 1}^{T} {\sum\nolimits_{p,q = 1}^{V} {\left\| {{\mathbf{X}}_{t}^{p} {\mathbf{w}}_{t}^{p} - {\mathbf{X}}_{t}^{q} {\mathbf{w}}_{t}^{q} } \right\|_{2}^{2} } } $$
(5)
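
A corresponding sketch of Eq. (5), assuming the per-view feature blocks laid out as above:

```python
import numpy as np

def r_view_view(W, X, M_v):
    """Eq. (5): for every task, penalize disagreement between the predictions
    X_t^p w_t^p and X_t^q w_t^q made from different views."""
    offsets = np.cumsum([0] + list(M_v))
    total = 0.0
    for t, Xt in enumerate(X):
        preds = [Xt[:, offsets[v]:offsets[v + 1]] @ W[offsets[v]:offsets[v + 1], t]
                 for v in range(len(M_v))]
        total += sum(np.sum((preds[p] - preds[q]) ** 2)
                     for p in range(len(preds)) for q in range(len(preds)))
    return total
```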

Therefore, the final MTMV model can be written as Eq. (6):

$$ { \hbox{min} }F({\mathbf{W}}) = \sum\nolimits_{i = 1}^{4} {\mu_{i} \cdot f_{i} ({\mathbf{W}})} $$
(6)

where \( \mu_{i} \) is a weight in the model, specified according to the user's preference in a particular classification problem, and \( f_{1} ({\mathbf{W}}) \), \( f_{2} ({\mathbf{W}}) \), \( f_{3} ({\mathbf{W}}) \) and \( f_{4} ({\mathbf{W}}) \) represent \( L \), \( R_{sparse} \), \( R_{task - task} \) and \( R_{view - view} \), respectively.
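
Putting the sketches together, Eq. (6) can be evaluated as a weighted sum of the four terms; the weights used below are placeholders rather than tuned values.

```python
def mtmv_objective(W, X, y, M_v, mu=(1.0, 0.1, 0.1, 0.1)):
    """Eq. (6): mu_1 * L + mu_2 * R_sparse + mu_3 * R_task-task + mu_4 * R_view-view,
    using the helper functions sketched above."""
    G = task_similarity(X)
    return (mu[0] * loss(W, X, y)
            + mu[1] * l21_norm(W)
            + mu[2] * r_task_task(W, G)
            + mu[3] * r_view_view(W, X, M_v))

# Example call with the synthetic data defined earlier:
# print(mtmv_objective(W, X, y, M_v))
```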

Since there is a lot of variation in the design of on-board display interfaces for cars, we apply the MTMV learning strategy to optimize our design. Specifically, an eye-tracking system, digitized data collection and other means are adopted in this paper to make a statistical analysis of the layout of the control interface, the optimal position of the control buttons and the setting of the interaction mode of each function, by investigating six groups: elder men (Age >= 60, Gender = Male), elder women (Age >= 60, Gender = Female), middle-aged men (Age, Gender = Male), middle-aged women (Age, Gender = Female), young men (Age, Gender = Male) and young women (Age, Gender = Female). That is, the author collects the optimal experience data of the six different groups for each function of the on-board display devices to form six views of information about the devices. In addition, the author collects this six-view information for five mainstream car types: an ordinary two-seater sports car, a five-seater car, a seven-seater commercial vehicle, a sport-utility vehicle (SUV) and a cargo truck. By analyzing the resulting six-view, five-task data, the author finally obtains a corresponding design scheme and proposal for the interface design of on-board display devices in the various car models that can bring the optimal experience to users.

5 Conclusion: The Application of Visual Selection on On-Board Interaction Design

The eye-tracking experiments and multi-objective tests allow us to conclude that the driver's interaction with and response to the outer environment comes from the input and identification of visual information. The process of noticing, responding and acting is very complex, and visual selection is its most significant basis. The driver's initial visual selection is mainly driven by the environment, where the environment refers both to the exterior surroundings and to the interior interface display (Fig. 17). The advanced stage of visual selection is determined by the driver's ability, experience and corresponding conceptual model. As a result, when carrying out on-board interaction design, designers have to treat urgent information as the top priority and process information according to its degree of urgency in order to keep drivers well informed of danger. In that way drivers can reach the advanced stage of visual selection and the behavioral stage more quickly.

Fig. 17. The inside interface display

Cognitive capture refers to the phenomenon of a person's attention being absorbed or disturbed by competing stimuli. These are usually visual stimuli, but other kinds, such as auditory stimuli, can also play an important part. The influence of a stimulus on drivers is largely determined by the mental load the information brings. Overloaded information will occupy a great deal of the driver's attention resources and dilute important information during driving. Although a HUD can present information directly on the front windscreen and make it easier for drivers to receive relevant information, it overlaps with the external environment; for example, presenting information as text can weaken the visual presence of the real road and make it less noticeable to drivers. So when emphasizing important information, designers have to make the information easy to identify and also provide a certain amount of presentation time so that drivers can respond promptly. It is therefore also necessary to set the presentation time of information on the on-board interface.

Research on product development suggests that the development processes of hardware and software products have structural similarities, which can be mapped onto their design rules, although each retains its own professional features. The design target of HMI transcends the interior and interaction design of cars; it is a significant factor generated from the intersection of these fields. We believe that the design targets of interior trim and of the hardware HMI interface of cars overlap: the display and control units are not only design elements concerning interior appearance and details but also the main design targets of the hardware HMI interface. We also believe that the software HMI interface has to face complex interaction situations, driving tasks and interaction tasks; the understanding of an interaction task will be one-sided if it is detached from the boundary of the driving task. The interaction methods and visual design of the software depend on the physical space provided by the hardware HMI interface, and the functional design of the display and control units of the hardware interface in turn reflects the configuration of the software system.

Despite its mature development, the starting points of car design are still safety and efficiency. Although the safety problem has largely been solved technically, the interaction task is becoming increasingly complex: how should designs be elaborated to deal with complex problems when they compete for driving resources? Against the backdrop of driverless cars, "safety" will rise from a basic need to part of the user experience. Efficiency concerns not only the efficiency of machine operation but also the machine's adaptability to humans; from the perspective of ergonomics, it reflects the concept of the machine adapting to the human. Offering solutions that match their functions shows the design's respect for people and thus evokes an emotional response. With the popularity of intelligent driving and driverless technology in the future, the relationship between humans and cars will become more diversified, and the automotive HMI interface will bring users experiences beyond their expectations.