On Using Real-Time Reachability for the Safety Assurance of Machine Learning Controllers

Patrick Musau, Nathaniel P. Hamilton, Diego Manzanas Lopez, Preston K. Robinette, Taylor T. Johnson
{"title":"On Using Real-Time Reachability for the Safety Assurance of Machine Learning Controllers","authors":"Patrick Musau, Nathaniel P. Hamilton, Diego Manzanas Lopez, Preston K. Robinette, Taylor T. Johnson","doi":"10.1109/ICAA52185.2022.00010","DOIUrl":null,"url":null,"abstract":"Over the last decade, advances in machine learning and sensing technology have paved the way for the belief that safe, accessible, and convenient autonomous vehicles may be realized in the near future. Despite the prolific competencies of machine learning models for learning the nuances of sensing, actuation, and control, they are notoriously difficult to assure. The challenge here is that some models, such as neural networks, are “black box” in nature, making verification and validation difficult, and sometimes infeasible. Moreover, these models are often tasked with operating in uncertain and dynamic environments where design time assurance may only be partially transferable. Thus, it is critical to monitor these components at runtime. One approach for providing runtime assurance of systems with unverified components is the simplex architecture, where an unverified component is wrapped with a safety controller and a switching logic designed to prevent dangerous behavior. In this paper, we propose the use of a real-time reachability algorithm for the implementation of such an architecture for the safety assurance of a 1/10 scale open source autonomous vehicle platform known as F1/10. The reachability algorithm (a) provides provable guarantees of safety, and (b) is used to detect potentially unsafe scenarios. In our approach, the need to analyze the underlying controller is abstracted away, instead focusing on the effects of the controller’s decisions on the system’s future states. 
We demonstrate the efficacy of our architecture through experiments conducted both in simulation and on an embedded hardware platform.","PeriodicalId":206047,"journal":{"name":"2022 IEEE International Conference on Assured Autonomy (ICAA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Assured Autonomy (ICAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAA52185.2022.00010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Over the last decade, advances in machine learning and sensing technology have paved the way for the belief that safe, accessible, and convenient autonomous vehicles may be realized in the near future. Despite the prolific competencies of machine learning models for learning the nuances of sensing, actuation, and control, they are notoriously difficult to assure. The challenge here is that some models, such as neural networks, are “black box” in nature, making verification and validation difficult, and sometimes infeasible. Moreover, these models are often tasked with operating in uncertain and dynamic environments where design-time assurance may only be partially transferable. Thus, it is critical to monitor these components at runtime. One approach for providing runtime assurance of systems with unverified components is the simplex architecture, where an unverified component is wrapped with a safety controller and a switching logic designed to prevent dangerous behavior. In this paper, we propose the use of a real-time reachability algorithm for the implementation of such an architecture for the safety assurance of a 1/10 scale open source autonomous vehicle platform known as F1/10. The reachability algorithm (a) provides provable guarantees of safety, and (b) is used to detect potentially unsafe scenarios. In our approach, the need to analyze the underlying controller is abstracted away, instead focusing on the effects of the controller’s decisions on the system’s future states. We demonstrate the efficacy of our architecture through experiments conducted both in simulation and on an embedded hardware platform.
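The switching logic the abstract describes can be illustrated with a minimal sketch: before a learned controller's command is applied, a reachability routine over-approximates the states the system could reach over a short horizon, and if that set can touch the unsafe region, the safety controller's command is used instead. Everything here (the toy single-integrator plant, the interval over-approximation, the names `reach_interval`, `simplex_step`, and the threshold) is illustrative and not from the paper, which uses its own real-time reachability algorithm and the F1/10 vehicle dynamics.

```python
# Hypothetical simplex-style runtime assurance loop. The plant, the
# interval reachability over-approximation, and all names/parameters
# are illustrative assumptions, not the paper's actual algorithm.

def dynamics(x, u, dt):
    # Toy single-integrator plant: position driven directly by the input.
    return x + u * dt

def reach_interval(x, u, dt, steps, disturbance):
    # Interval over-approximation of the reachable positions over the
    # horizon, inflating each step by a bounded disturbance.
    lo = hi = x
    for _ in range(steps):
        lo = dynamics(lo, u, dt) - disturbance
        hi = dynamics(hi, u, dt) + disturbance
    return lo, hi

UNSAFE_THRESHOLD = 10.0  # positions at or beyond this value are unsafe

def simplex_step(x, learned_u, safe_u, dt=0.05, steps=20, disturbance=0.01):
    # Check the learned controller's command before applying it: if the
    # over-approximated reachable set can intersect the unsafe region,
    # switch to the safety controller (the simplex switching logic).
    lo, hi = reach_interval(x, learned_u, dt, steps, disturbance)
    if hi >= UNSAFE_THRESHOLD:
        return safe_u    # potentially unsafe: override
    return learned_u     # safe over the horizon: pass through
```

Note that only the *effect* of the learned command on future states is analyzed, never the controller's internals — matching the abstract's point that the need to analyze the underlying (black-box) controller is abstracted away.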