Paper

Authors: Mir Riyanul Islam; Mobyen Uddin Ahmed and Shahina Begum

Affiliation: Artificial Intelligence and Intelligent Systems Research Group, School of Innovation, Design and Engineering, Mälardalen University, Universitetsplan 1, 722 20 Västerås, Sweden

Keyword(s): Artificial Intelligence, Driving Behaviour, Feature Attribution, Evaluation, Explainable Artificial Intelligence, Interpretability, Road Safety.

Abstract: Understanding individual car drivers’ behavioural variations and heterogeneity is a significant aspect of developing car simulator technologies, which are widely used in transport safety. This study also characterizes the heterogeneity of drivers’ behaviour in terms of risk and hurry, using both real-time on-track and in-simulator driving performance features. Machine learning (ML) interpretability has become increasingly crucial for identifying accurate and relevant structural relationships between spatial events and the factors that explain drivers’ behaviour while it is being classified, and for evaluating the resulting explanations. However, the high predictive power of ML algorithms often ignores the characteristics of non-stationary domain relationships in spatiotemporal data (e.g., dependence, heterogeneity), which can lead to incorrect interpretations and poor management decisions. This study addresses this critical issue of ‘interpretability’ in ML-based modelling of the structural relationships between events and the corresponding features of car drivers’ behavioural variations. In this work, an exploratory experiment is described that combines simulator and real driving conducted concurrently, with the goal of enhancing simulator technologies. Initially, several analytic techniques were explored on the heterogeneous data to examine simulator bias in drivers’ behaviour. Afterwards, five different ML classifier models were developed to classify risk and hurry in drivers’ behaviour in real and simulator driving. Furthermore, two different feature attribution-based explanation models were developed to explain the classifiers’ decisions. According to the results and observations, Gradient Boosted Decision Trees performed best among the classifiers, with a classification accuracy of 98.62%. After quantitative evaluation, the explanations from Shapley Additive Explanations (SHAP) were found to be the more accurate of the two feature attribution methods. The use of different metrics for evaluating explanation methods and their outcomes lays a path toward further research on enhancing feature attribution methods.
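
The following is a minimal sketch, not the authors' code, of the kind of pipeline the abstract describes: a Gradient Boosted Decision Trees classifier for risk/hurry labels whose individual predictions are explained with SHAP feature attributions. The feature names and synthetic data below are illustrative placeholders standing in for the paper's on-track and in-simulator driving-performance features.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import shap

# Hypothetical driving-performance features with a binary "risk" label;
# in the study these would come from real and simulator driving sessions.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "speed_mean": rng.normal(70, 10, 500),
    "lane_deviation": rng.normal(0.3, 0.1, 500),
    "steering_reversals": rng.poisson(5, 500),
})
y = (X["lane_deviation"] + 0.01 * X["speed_mean"] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Gradient Boosted Decision Trees: the best-performing classifier
# reported in the paper (98.62% accuracy on their data, not on this toy set).
clf = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# SHAP feature attributions explain each individual classification.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:",
      dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```

The mean absolute SHAP value per feature gives a global ranking of feature importance, while the per-sample values explain individual decisions; evaluating such attributions quantitatively is the focus of the paper.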

CC BY-NC-ND 4.0

Paper citation in several formats:
Islam, M.; Ahmed, M. and Begum, S. (2023). Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-623-1; ISSN 2184-433X, SciTePress, pages 392-404. DOI: 10.5220/0011801000003393

@conference{icaart23,
author={Mir Riyanul Islam and Mobyen Uddin Ahmed and Shahina Begum},
title={Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2023},
pages={392-404},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011801000003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data
SN - 978-989-758-623-1
IS - 2184-433X
AU - Islam, M.
AU - Ahmed, M.
AU - Begum, S.
PY - 2023
SP - 392
EP - 404
DO - 10.5220/0011801000003393
PB - SciTePress