Interpretable Detection of Distribution Shifts in Learning Enabled Cyber-Physical Systems

Yahan Yang, R. Kaur, Souradeep Dutta, Insup Lee
{"title":"基于学习的信息物理系统中分布移位的可解释检测","authors":"Yahan Yang, R. Kaur, Souradeep Dutta, Insup Lee","doi":"10.1109/iccps54341.2022.00027","DOIUrl":null,"url":null,"abstract":"The use of learning based components in cyber-physical systems (CPS) has created a gamut of possible avenues to use high dimensional real world signals generated from sensors like camera and LiDAR. The ability to process such signals can be largely attributed to the adoption of high-capacity function approximators like deep neural networks. However, this does not come without its potential perils. The pitfalls arise from possible over-fitting, and subsequent unsafe behavior when exposed to unknown environments. One challenge is that, in high dimensional input spaces it is almost impossible to experience enough training data in the design phase. What is required here, is an efficient way to flag out-of-distribution (OOD) samples that is precise enough to not raise too many false alarms. In addition, the system needs to be able to detect these in a computationally efficient manner at runtime. In this paper, our proposal is to build good representations for in-distribution data. We introduce the idea of a memory bank to store prototypical samples from the input space. We use these memories to compute probability density estimates using kernel density estimation techniques. We evaluate our technique on two challenging scenarios : a self-driving car setting implemented inside the simulator CARLA with image inputs, and an autonomous racing car navigation setting, with LiDAR inputs. In both settings, it was observed that a deviation from indistribution setting can potentially lead to deviation from safe behavior. An added benefit of using training samples as memories to detect out-of-distribution inputs is that the system is interpretable to a human operator. Explanation of this nature is generally hard to obtain from pure deep learning based alter-natives. Our code for reproducing the experiments is available at https://github.com/yangy96/interpretable_ood_detection.git","PeriodicalId":340078,"journal":{"name":"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Interpretable Detection of Distribution Shifts in Learning Enabled Cyber-Physical Systems\",\"authors\":\"Yahan Yang, R. Kaur, Souradeep Dutta, Insup Lee\",\"doi\":\"10.1109/iccps54341.2022.00027\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of learning based components in cyber-physical systems (CPS) has created a gamut of possible avenues to use high dimensional real world signals generated from sensors like camera and LiDAR. The ability to process such signals can be largely attributed to the adoption of high-capacity function approximators like deep neural networks. However, this does not come without its potential perils. The pitfalls arise from possible over-fitting, and subsequent unsafe behavior when exposed to unknown environments. One challenge is that, in high dimensional input spaces it is almost impossible to experience enough training data in the design phase. What is required here, is an efficient way to flag out-of-distribution (OOD) samples that is precise enough to not raise too many false alarms. 
In addition, the system needs to be able to detect these in a computationally efficient manner at runtime. In this paper, our proposal is to build good representations for in-distribution data. We introduce the idea of a memory bank to store prototypical samples from the input space. We use these memories to compute probability density estimates using kernel density estimation techniques. We evaluate our technique on two challenging scenarios : a self-driving car setting implemented inside the simulator CARLA with image inputs, and an autonomous racing car navigation setting, with LiDAR inputs. In both settings, it was observed that a deviation from indistribution setting can potentially lead to deviation from safe behavior. An added benefit of using training samples as memories to detect out-of-distribution inputs is that the system is interpretable to a human operator. Explanation of this nature is generally hard to obtain from pure deep learning based alter-natives. Our code for reproducing the experiments is available at https://github.com/yangy96/interpretable_ood_detection.git\",\"PeriodicalId\":340078,\"journal\":{\"name\":\"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/iccps54341.2022.00027\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccps54341.2022.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9

Abstract

The use of learning-based components in cyber-physical systems (CPS) has created a gamut of possible avenues to use high-dimensional real-world signals generated from sensors such as cameras and LiDAR. The ability to process such signals can largely be attributed to the adoption of high-capacity function approximators like deep neural networks. However, this does not come without potential perils. The pitfalls arise from possible over-fitting and subsequent unsafe behavior when exposed to unknown environments. One challenge is that, in high-dimensional input spaces, it is almost impossible to experience enough training data in the design phase. What is required is an efficient way to flag out-of-distribution (OOD) samples that is precise enough not to raise too many false alarms. In addition, the system needs to be able to detect these in a computationally efficient manner at runtime. In this paper, our proposal is to build good representations for in-distribution data. We introduce the idea of a memory bank to store prototypical samples from the input space. We use these memories to compute probability density estimates using kernel density estimation techniques. We evaluate our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs, and an autonomous racing car navigation setting with LiDAR inputs. In both settings, it was observed that a deviation from the in-distribution setting can potentially lead to a deviation from safe behavior. An added benefit of using training samples as memories to detect out-of-distribution inputs is that the system is interpretable to a human operator. An explanation of this nature is generally hard to obtain from purely deep-learning-based alternatives. Our code for reproducing the experiments is available at https://github.com/yangy96/interpretable_ood_detection.git
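For readers who want a concrete picture of the detector the abstract describes, the sketch below shows a memory-bank plus kernel-density-estimation OOD score in Python. It is a minimal illustration under stated assumptions: the class name MemoryBankKDE, the Gaussian kernel, the bandwidth parameter, and the nearest_memories helper are hypothetical choices, not the authors' implementation; the linked repository contains their actual code.

```python
# Minimal sketch: memory bank of prototypical features + Gaussian KDE OOD score.
# All names and the kernel choice are illustrative assumptions, not the paper's code.
import numpy as np

class MemoryBankKDE:
    def __init__(self, bandwidth: float = 1.0):
        self.bandwidth = bandwidth
        self.memories = None  # prototypical in-distribution feature vectors, shape (N, d)

    def fit(self, features: np.ndarray) -> None:
        """Store prototypical training features as the memory bank."""
        self.memories = np.asarray(features, dtype=np.float64)

    def log_density(self, x: np.ndarray) -> float:
        """Gaussian-kernel density estimate of x under the stored memories."""
        diffs = self.memories - x                      # (N, d)
        sq_dists = np.sum(diffs ** 2, axis=1)          # squared Euclidean distances
        d = self.memories.shape[1]
        log_kernels = -sq_dists / (2.0 * self.bandwidth ** 2)
        log_norm = -0.5 * d * np.log(2.0 * np.pi * self.bandwidth ** 2)
        # log-mean-exp for numerical stability
        m = np.max(log_kernels)
        return float(log_norm + m + np.log(np.mean(np.exp(log_kernels - m))))

    def is_ood(self, x: np.ndarray, threshold: float) -> bool:
        """Flag x as out-of-distribution when its estimated density is too low."""
        return bool(self.log_density(x) < threshold)

    def nearest_memories(self, x: np.ndarray, k: int = 3) -> np.ndarray:
        """Indices of the k closest memories, usable as an explanation for the operator."""
        sq_dists = np.sum((self.memories - x) ** 2, axis=1)
        return np.argsort(sq_dists)[:k]
```

In such a scheme, the threshold would typically be calibrated on held-out in-distribution data (for example, a low percentile of the training log-densities), and the indices returned by nearest_memories point the operator to the stored training samples most similar to the current input, which is the kind of interpretability the abstract highlights.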