XAI-AV: Explainable Artificial Intelligence for Trust Management in Autonomous Vehicles

Harsh Mankodiya, M. Obaidat, Rajesh Gupta, S. Tanwar
{"title":"XAI-AV:可解释的自动驾驶汽车信任管理人工智能","authors":"Harsh Mankodiya, M. Obaidat, Rajesh Gupta, S. Tanwar","doi":"10.1109/CCCI52664.2021.9583190","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) is the most looked up technology with a diverse range of applications across all the fields, whether it is intelligent transportation systems (ITS), medicine, healthcare, military operations, or others. One such application is autonomous vehicles (AVs), which comes under the category of AI in ITS. Vehicular Adhoc Networks (VANET) makes communication possible between AVs in the system. The performance of each vehicle depends upon the information exchanged between AVs. False or malicious information can perturb the whole system leading to severe consequences. Hence, the detection of malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to predict the flaw in the data transmitted. Recent papers that used the stacking ML approach gave an accuracy of 98.44%. Decision tree-based random forest is used to solve the problem in this paper. We achieved accuracy and F1 score of 98.43% and 98.5% respectively on the VeRiMi dataset in this paper. Explainable AI (XAI) is the method and technique to make the complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface of the evaluation metrics to explain and measure the model’s performance. Applying XAI to these complex AI models can ensure a cautious use of AI for AVs.","PeriodicalId":136382,"journal":{"name":"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"XAI-AV: Explainable Artificial Intelligence for Trust Management in Autonomous Vehicles\",\"authors\":\"Harsh Mankodiya, M. Obaidat, Rajesh Gupta, S. Tanwar\",\"doi\":\"10.1109/CCCI52664.2021.9583190\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) is the most looked up technology with a diverse range of applications across all the fields, whether it is intelligent transportation systems (ITS), medicine, healthcare, military operations, or others. One such application is autonomous vehicles (AVs), which comes under the category of AI in ITS. Vehicular Adhoc Networks (VANET) makes communication possible between AVs in the system. The performance of each vehicle depends upon the information exchanged between AVs. False or malicious information can perturb the whole system leading to severe consequences. Hence, the detection of malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to predict the flaw in the data transmitted. Recent papers that used the stacking ML approach gave an accuracy of 98.44%. Decision tree-based random forest is used to solve the problem in this paper. We achieved accuracy and F1 score of 98.43% and 98.5% respectively on the VeRiMi dataset in this paper. Explainable AI (XAI) is the method and technique to make the complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface of the evaluation metrics to explain and measure the model’s performance. 
Applying XAI to these complex AI models can ensure a cautious use of AI for AVs.\",\"PeriodicalId\":136382,\"journal\":{\"name\":\"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCCI52664.2021.9583190\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCCI52664.2021.9583190","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12

Abstract

Artificial intelligence (AI) is among the most widely adopted technologies, with applications spanning intelligent transportation systems (ITS), medicine, healthcare, military operations, and other fields. One such application is autonomous vehicles (AVs), which fall under AI in ITS. Vehicular ad hoc networks (VANETs) make communication between the AVs in the system possible. The performance of each vehicle depends on the information exchanged between AVs; false or malicious information can perturb the whole system and lead to severe consequences. Hence, detecting malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to predict flaws in the transmitted data. Recent work using a stacking ML approach reported an accuracy of 98.44%. In this paper, a decision-tree-based random forest is used to solve the problem, achieving an accuracy of 98.43% and an F1 score of 98.5% on the VeRiMi dataset. Explainable AI (XAI) comprises methods and techniques that make complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface and evaluation metrics to explain and measure the model's performance. Applying XAI to these complex AI models can ensure a cautious use of AI for AVs.
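To make the workflow described in the abstract concrete, the following is a minimal, hypothetical sketch in Python with scikit-learn: it trains a decision-tree-based random forest for binary misbehaviour detection, reports accuracy and F1 (the metrics used in the paper), and applies permutation importance as one possible post-hoc explanation technique. The synthetic data, feature layout, and the choice of permutation importance are illustrative assumptions; the paper's actual VeRiMi preprocessing and XAI model interface may differ.

```python
# Hedged sketch: random-forest misbehaviour classifier with accuracy/F1
# evaluation and a simple permutation-importance explanation.
# Synthetic data stands in for the VeRiMi dataset (an assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Placeholder for VeRiMi-style per-message features (e.g., position and
# speed plausibility checks); real column names would come from the dataset.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Decision-tree-based random forest, the model family used in the paper.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate with the metrics reported in the paper: accuracy and F1 score.
y_pred = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, y_pred):.4f}")
print(f"F1 score: {f1_score(y_test, y_pred):.4f}")

# One common post-hoc XAI technique for tree ensembles: permutation
# importance, which ranks features by how much shuffling each one
# degrades held-out performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=42)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.4f}")
```

The ranked importances give a per-feature view of what drives the classifier's decisions, which is the kind of interpretability signal XAI methods are meant to provide on top of the raw accuracy and F1 numbers.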