Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles

Jiming Xie, Yan Zhang, Yaqin Qin, Bijun Wang, Shuai Dong, Ke Li, Yulan Xia

Transportation Research Interdisciplinary Perspectives, Vol. 29, Article 101278 (2025-01-01)
DOI: 10.1016/j.trip.2024.101278
URL: https://www.sciencedirect.com/science/article/pii/S2590198224002641
Citations: 0
Abstract
To achieve trustworthy human-like decisions for autonomous vehicles (AVs), this paper proposes a new explainable framework for personalized human-like driving intention analysis. In the first stage, we adopt a spectral clustering method for driving style characterization and introduce a misclassification cost matrix to describe different driving needs. Inspired by the parallelism in the complex neural networks of the human brain, we construct a Width Human-like neural network (WNN) model for personalized cognition and human-like driving intention decision making. In the second stage, we draw inspiration from the field of brain-like trusted AI to construct a robust, in-depth, and unbiased evaluation and interpretability framework involving three dimensions: Permutation Importance (PI) analysis, Partial Dependence Plot (PDP) analysis, and model complexity analysis. An empirical investigation using real driving trajectory data from Kunming, China, confirms that our approach predicts potential driving decisions with high accuracy while providing the rationale behind implicit AV decisions. These findings have the potential to inform ongoing research on brain-like neural learning and could serve as a catalyst for developing faster and more capable algorithmic solutions in the realm of intelligent transportation.