Explaining Multimodal Errors in Autonomous Vehicles
Leilani H. Gilpin, Vishnu Penubarthi, Lalana Kagal
2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), published 2021-10-06
DOI: 10.1109/DSAA53316.2021.9564178
Citations: 7
Abstract
Complex machines, such as autonomous vehicles, are unable to reconcile conflicting behaviors among their underlying subsystems, which leads to accidents and other negative consequences. Existing approaches to error and anomaly detection are not equipped to detect and mitigate inconsistencies among parts. In this paper, we present "Anomaly Detection through Explanations" (ADE), a multimodal monitoring architecture that reconciles critical discrepancies under uncertainty. ADE uses symbolic explanations as a debugging language, examining the underlying reasons for subsystem decisions. Further, when decisions conflict, our method uses a synthesizer, along with a priority hierarchy, to process subsystem outputs and their underlying reasons and to judge the conflicts transparently. We show the accuracy and performance of ADE on autonomous vehicle scenarios and data, and discuss other error evaluations for future work.
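To make the synthesizer-plus-priority-hierarchy idea concrete, the sketch below shows one way such conflict reconciliation could look. It is a minimal illustration only: the subsystem names, the `PRIORITY` ordering, and the `synthesize` function are hypothetical assumptions for exposition, not the authors' actual ADE implementation.

```python
# Minimal sketch of reconciling conflicting subsystem decisions via a priority
# hierarchy while keeping every subsystem's explanation visible.
# All names and the priority ordering are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SubsystemOutput:
    subsystem: str    # e.g. "vision", "lidar", "planner"
    decision: str     # proposed action, e.g. "brake" or "proceed"
    explanation: str  # symbolic reason supporting the decision


# Hypothetical priority hierarchy: lower number = higher priority in a conflict.
PRIORITY = {"lidar": 0, "vision": 1, "planner": 2}


def synthesize(outputs: list[SubsystemOutput]) -> tuple[str, list[str]]:
    """Return a single decision plus the explanations that accompany it.

    If all subsystems agree, pass the shared decision through. Otherwise,
    defer to the highest-priority subsystem, but surface every explanation
    so the conflict stays transparent to a human or downstream monitor.
    """
    decisions = {o.decision for o in outputs}
    reasons = [f"{o.subsystem}: {o.explanation}" for o in outputs]
    if len(decisions) == 1:
        return decisions.pop(), reasons
    winner = min(outputs, key=lambda o: PRIORITY.get(o.subsystem, len(PRIORITY)))
    return winner.decision, reasons


if __name__ == "__main__":
    outputs = [
        SubsystemOutput("vision", "proceed", "no obstacle detected in camera frame"),
        SubsystemOutput("lidar", "brake", "point cluster 8 m ahead consistent with a pedestrian"),
    ]
    decision, reasons = synthesize(outputs)
    print(decision)  # -> "brake" (lidar outranks vision in this illustrative hierarchy)
    print(reasons)
```

In this toy version the explanations are plain strings; the paper's symbolic explanations would presumably carry more structure so the synthesizer can reason over them rather than merely report them.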