
A multimodal human-machine interface enabling situation-adaptive control inputs for highly automated vehicles


Abstract:

Intelligent vehicles operating at different levels of automation require the driver to conduct the dynamic driving task (DDT) fully or partially, and to perform fallback of the DDT, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces (MMIs) offer advantages over unimodal interfaces, such as improved recognition, faster interaction, and situation-adaptability. In this study, we developed and evaluated an MMI system with three input modalities: touchscreen, hand-gesture, and haptic, used to enter tactical-level control commands (e.g., lane changing, overtaking, and parking). We conducted experiments in a driving simulator to evaluate the effectiveness of the MMI system. The results show that the multimodal HMI significantly reduced driver workload, improved interaction efficiency, and minimized input errors compared with the unimodal interfaces. Moreover, we found relationships between input types and modalities: location-based inputs suit the touchscreen interface, whereas time-critical inputs suit the haptic interface. These results demonstrate the functional advantages and effectiveness of the multimodal interface system over its unimodal components for conducting tactical-level driving tasks.
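To make the modality-pairing finding concrete, below is a minimal, hypothetical sketch (not from the paper, whose implementation details are not available here) of how tactical-level commands arriving from several input modalities might be arbitrated. All names (Modality, TacticalCommand, ModalityArbiter) and the conflict-resolution policy, under which the haptic channel wins short-window conflicts because it is assumed to carry time-critical input, are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class Modality(Enum):
    TOUCHSCREEN = auto()  # suited to location-based inputs (e.g., picking a parking spot)
    GESTURE = auto()
    HAPTIC = auto()       # suited to time-critical inputs (e.g., aborting an overtake)


class TacticalCommand(Enum):
    LANE_CHANGE = auto()
    OVERTAKE = auto()
    PARK = auto()


@dataclass
class ControlInput:
    modality: Modality
    command: TacticalCommand
    timestamp: float  # seconds since session start


class ModalityArbiter:
    """Accepts the same tactical command from any modality; when two
    modalities deliver conflicting commands within a short window, the
    earlier haptic command is kept (assumed time-critical)."""

    CONFLICT_WINDOW_S = 0.5

    def __init__(self) -> None:
        self._last: ControlInput | None = None

    def submit(self, new: ControlInput) -> TacticalCommand:
        prev = self._last
        if (
            prev is not None
            and prev.command != new.command
            and new.timestamp - prev.timestamp < self.CONFLICT_WINDOW_S
            and prev.modality is Modality.HAPTIC
        ):
            # Keep the earlier haptic command; drop the conflicting input.
            return prev.command
        self._last = new
        return new.command


if __name__ == "__main__":
    arbiter = ModalityArbiter()
    print(arbiter.submit(ControlInput(Modality.HAPTIC, TacticalCommand.LANE_CHANGE, 0.0)))
    # A conflicting touchscreen command 0.2 s later loses to the haptic one.
    print(arbiter.submit(ControlInput(Modality.TOUCHSCREEN, TacticalCommand.PARK, 0.2)))

The fixed priority here is only one possible policy; a real system would likely weight modalities by measured reliability and context rather than by a hard-coded rule.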
Date of Conference: 11-14 June 2017
Date Added to IEEE Xplore: 31 July 2017
Conference Location: Los Angeles, CA, USA
