Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract

P. Koopman
{"title":"自动驾驶汽车验证的挑战:主题演讲摘要","authors":"P. Koopman","doi":"10.1145/3055378.3055379","DOIUrl":null,"url":null,"abstract":"Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. Thatfis why software safety standards emphasize high quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that can not be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a \"safe\" envelope knows that it is safe and can operate with full autonomy. A system operating within an \"unsafe\" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between \"safe\" and \"unsafe\" requires human supervision, because the autonomy system can not be sure it is safe. 
One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing \"safe\" envelopes, shrinking \"unsafe\" envelopes, and eliminating any gap areas.","PeriodicalId":346760,"journal":{"name":"Proceedings of the 1st International Workshop on Safe Control of Connected and Autonomous Vehicles","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract\",\"authors\":\"P. Koopman\",\"doi\":\"10.1145/3055378.3055379\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. Thatfis why software safety standards emphasize high quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, usually we know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing as is required for executing a safety validation process. In other words, we are building systems that can not be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes. An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a \\\"safe\\\" envelope knows that it is safe and can operate with full autonomy. A system operating within an \\\"unsafe\\\" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine if the result is safe. 
While this is not necessarily an easy thing to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles. Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between \\\"safe\\\" and \\\"unsafe\\\" requires human supervision, because the autonomy system can not be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing \\\"safe\\\" envelopes, shrinking \\\"unsafe\\\" envelopes, and eliminating any gap areas.\",\"PeriodicalId\":346760,\"journal\":{\"name\":\"Proceedings of the 1st International Workshop on Safe Control of Connected and Autonomous Vehicles\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1st International Workshop on Safe Control of Connected and Autonomous Vehicles\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3055378.3055379\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Safe Control of Connected and Autonomous Vehicles","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3055378.3055379","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Developers of autonomous systems face distinct challenges in conforming to established methods of validating safety. It is well known that testing alone is insufficient to assure safety, because testing long enough to establish ultra-dependability is generally impractical. That is why software safety standards emphasize high-quality development processes. Testing then validates process execution rather than directly validating dependability. Two significant challenges arise in applying traditional safety processes to autonomous vehicles. First, simply gathering a complete set of system requirements is difficult because of the sheer number of combinations of possible scenarios and faults. Second, autonomy systems commonly use machine learning (ML) in a way that makes the requirements and design of the system opaque. After training, we usually know what an ML component will do for an input it has seen, but generally not what it will do for at least some other inputs until we try them. Both of these issues make it difficult to trace requirements and designs to testing, as is required for executing a safety validation process. In other words, we are building systems that cannot be validated due to incomplete or even unknown requirements and designs. Adaptation makes the problem even worse by making the system that must be validated a moving target. In the general case, it is impractical to validate all the possible adaptation states of an autonomy system using traditional safety design processes.

An approach that can help with the requirements, design, and adaptation problems is basing a safety argument not on correctness of the autonomy functionality itself, but rather on conformance to a set of safety envelopes. Each safety envelope describes a boundary within the operational state space of the autonomy system. A system operating within a "safe" envelope knows that it is safe and can operate with full autonomy. A system operating within an "unsafe" envelope knows that it is unsafe, and must invoke a failsafe action. Multiple partial specifications can be used as an envelope set, with the intersection of safe envelopes permitting full autonomy, and the union of unsafe envelopes provoking validated, and potentially complex, failsafe responses. Envelope mechanisms can be implemented using traditional software engineering techniques, reducing the problems with requirements, design, and adaptation that would otherwise impede safety validation. Rather than attempting to prove that autonomy will always work correctly (which is still a valuable goal to improve availability), the envelope approach measures the behavior of one or more autonomous components to determine whether the result is safe. While this is not necessarily easy to do, there is reason to believe that checking autonomy behaviors for safety is easier than implementing perfect, optimized autonomy actions. This envelope approach might be used to detect faults during development and to trigger failsafes in fleet vehicles.

Inevitably there will be tension between simplicity of the envelope definitions and permissiveness, with more permissive envelope definitions likely being more complex. Operating in the gap areas between "safe" and "unsafe" requires human supervision, because the autonomy system cannot be sure it is safe. One way to look at the progression from partial to full autonomy is that, over time, systems can increase permissiveness by defining and growing "safe" envelopes, shrinking "unsafe" envelopes, and eliminating any gap areas.
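As a rough illustration of the envelope idea described above, the following Python sketch shows a runtime monitor that combines several partial envelope specifications into a single mode decision: full autonomy inside the intersection of "safe" envelopes, a failsafe response inside the union of "unsafe" envelopes, and human supervision in the gap between them. The envelope classes, the numeric thresholds, and the state dictionary are hypothetical and chosen only to mirror the structure of the argument; they are not taken from the keynote.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class Mode(Enum):
    FULL_AUTONOMY = "full_autonomy"          # inside the intersection of all "safe" envelopes
    HUMAN_SUPERVISION = "human_supervision"  # gap area: neither provably safe nor provably unsafe
    FAILSAFE = "failsafe"                    # inside the union of the "unsafe" envelopes


@dataclass
class Envelope:
    """A partial specification over the operational state space.

    is_safe returns True when the state is provably inside this envelope's
    safe boundary; is_unsafe returns True when it is provably unsafe.
    Both may return False for the same state, which is the gap area.
    """
    name: str
    is_safe: Callable[[Dict[str, float]], bool]
    is_unsafe: Callable[[Dict[str, float]], bool]


def decide_mode(state: Dict[str, float], envelopes: List[Envelope]) -> Mode:
    """Combine partial envelope specifications into one mode decision."""
    if any(env.is_unsafe(state) for env in envelopes):
        return Mode.FAILSAFE         # union of unsafe envelopes triggers a validated failsafe
    if all(env.is_safe(state) for env in envelopes):
        return Mode.FULL_AUTONOMY    # intersection of safe envelopes permits full autonomy
    return Mode.HUMAN_SUPERVISION    # gap: the system cannot be sure it is safe


# Hypothetical example envelopes; thresholds are illustrative only.
speed_envelope = Envelope(
    name="speed",
    is_safe=lambda s: s["speed_mps"] <= 15.0,
    is_unsafe=lambda s: s["speed_mps"] > 25.0,
)
clearance_envelope = Envelope(
    name="obstacle_clearance",
    is_safe=lambda s: s["min_clearance_m"] >= 5.0,
    is_unsafe=lambda s: s["min_clearance_m"] < 1.0,
)

if __name__ == "__main__":
    state = {"speed_mps": 12.0, "min_clearance_m": 3.0}
    # Safe on speed, but in the gap on clearance -> HUMAN_SUPERVISION
    print(decide_mode(state, [speed_envelope, clearance_envelope]))
```

The point of the sketch is that the monitor itself uses only conventional constructs (thresholds and boolean combination), so it can be specified, reviewed, and tested with traditional software engineering and safety processes even when the autonomy components it supervises cannot.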