Hazard Analysis of Verification Supporting Arguments for Assured Autonomy

K. Wasson, A. Hocking, Jonathan C. Rowanhill
2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)
Published: 2020-10-11
DOI: 10.1109/DASC50938.2020.9256762
Citations: 0

Abstract

The kinds of systems we are building, and the ways we are building them, are evolving. This evolution is invalidating analyses and assumptions upon which we have relied as bases for design assurance, imposing a need for new criteria and means of compliance for many autonomy-enabling technologies. While significant investigation activity into assurance bases for these technologies is underway across research, development, and standards bodies, the community will need to make sense of results coming out. We require evaluation frameworks and decision support to establish trust in, and guide selection of, new verification concepts and methods. In this work, we propose a lens for the evaluation of verification methods in development to ground new criteria, standards, and means of compliance for assuring and approving adaptive and intelligent systems. We root the evaluation framework in examining verification as a system in its own right, with a job to do and ways it can fail to do it. We then outline a structured argument in which it can be concluded a verification method is fit for purpose if it meets its requirements and the hazards of its use are adequately mitigated. To identify these hazards, we illustrate how industry-standard hazard analysis can be performed on verification itself, and how the results of such an analysis can be integrated into structured arguments supporting stakeholder communication and decision making. Finally, we note environments where we are beginning to use this approach, including to provide feedback within standards development activity.