Understanding the impacts of negative advanced driving assistance system warnings on hazardous materials truck drivers’ responses using interpretable machine learning

IF 8.0 · CAS Zone 2 (Computer Science) · JCR Q1 · AUTOMATION & CONTROL SYSTEMS
Yichang Shao, Yueru Xu, Zhirui Ye, Yuhan Zhang, Weijie Chen, Nirajan Shiwakoti, Xiaomeng Shi
Engineering Applications of Artificial Intelligence, Volume 146, Article 110308
Published: 2025-02-20 · DOI: 10.1016/j.engappai.2025.110308
Available at: https://www.sciencedirect.com/science/article/pii/S0952197625003082
Citations: 0

Abstract

In recent years, Artificial Intelligence (AI) has significantly enhanced road safety, with Explainable Artificial Intelligence (XAI) providing essential transparency and trust. Our research uses AI to improve Advanced Driving Assistance Systems (ADAS) by investigating a gap in Forward Collision Warning (FCW): the impact of previous negative warnings (false and nuisance warnings) on drivers’ response times to subsequent accurate FCWs. By integrating XAI methods, we offer insights into the factors affecting driver behavior and system trust. Using an extensive dataset that encompasses various driving scenarios and driver behaviors, we constructed a gradient-boosting machine model to forecast driver response times. To explain the underlying mechanics of the model, the Shapley Additive Explanations (SHAP) framework was employed, enabling a comprehensive interpretation of feature importance and inter-feature interactions. Key findings reveal that higher speeds heighten driver responsiveness due to amplified alertness, whereas slower speeds lead to delayed reactions. Previous negative warnings significantly extend response times to subsequent accurate warnings. Additionally, older drivers require longer response times. The interaction between the driving period and previous warning judgments profoundly affects subsequent driver responsiveness, indicating trust dynamics with FCW systems. By using interpretable machine learning, we provide insights into ADAS functionality, suggesting pathways for improving FCW responsiveness and contributing to the field of XAI applications. In the validation experiment, our approach improved driver response times, reducing the average from 2.1 s to 1.6 s; the proportion of ignored warnings decreased from 12% to 6%, and the driver acceptance rate increased from 59% to 71%.
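The modeling pipeline the abstract describes (a gradient-boosting machine predicting driver response time, followed by post-hoc feature attribution) can be sketched as below. This is a minimal illustration on synthetic data, not the paper's actual dataset or model: the feature names, coefficients, and sample sizes are invented assumptions, and scikit-learn's permutation importance stands in for SHAP (which requires the separate `shap` library) as a lightweight attribution method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features loosely mirroring the factors the abstract names:
speed = rng.uniform(20, 90, n)          # vehicle speed, km/h
driver_age = rng.uniform(25, 60, n)     # years
prev_negative = rng.integers(0, 2, n)   # 1 = a prior false/nuisance warning
driving_hours = rng.uniform(0, 8, n)    # hours into the current shift

# Synthetic response time (s): faster at high speed, slower with age,
# prior negative warnings, and long driving periods (illustrative only).
response = (3.0 - 0.02 * speed + 0.015 * driver_age
            + 0.4 * prev_negative + 0.05 * driving_hours
            + rng.normal(0.0, 0.1, n))

X = np.column_stack([speed, driver_age, prev_negative, driving_hours])
X_tr, X_te, y_tr, y_te = train_test_split(X, response, random_state=0)

# Gradient-boosting machine forecasting response time
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")

# Feature attribution (permutation importance as a SHAP stand-in)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
names = ["speed", "driver_age", "prev_negative", "driving_hours"]
for name, score in zip(names, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

On this synthetic setup, speed dominates the attribution because it spans the widest effect range; in the paper, SHAP additionally exposes inter-feature interactions (e.g. driving period with previous warning judgment), which permutation importance does not capture.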

Source Journal
Engineering Applications of Artificial Intelligence (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 9.60
Self-citation rate: 10.00%
Annual publications: 505
Review time: 68 days
Journal description: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.