{"title":"保证自治的验证支持论据的危害分析","authors":"K. Wasson, A. Hocking, Jonathan C. Rowanhill","doi":"10.1109/DASC50938.2020.9256762","DOIUrl":null,"url":null,"abstract":"The kinds of systems we are building, and the ways we are building them, are evolving. This evolution is invalidating analyses and assumptions upon which we have relied as bases for design assurance, imposing a need for new criteria and means of compliance for many autonomy-enabling technologies. While significant investigation activity into assurance bases for these technologies is underway across research, development, and standards bodies, the community will need to make sense of results coming out. We require evaluation frameworks and decision support to establish trust in, and guide selection of, new verification concepts and methods. In this work, we propose a lens for the evaluation of verification methods in development to ground new criteria, standards, and means of compliance for assuring and approving adaptive and intelligent systems. We root the evaluation framework in examining verification as a system in its own right, with a job to do and ways it can fail to do it. We then outline a structured argument in which it can be concluded a verification method is fit for purpose if it meets its requirements and the hazards of its use are adequately mitigated. To identify these hazards, we illustrate how industry-standard hazard analysis can be performed on verification itself, and how the results of such an analysis can be integrated into structured arguments supporting stakeholder communication and decision making. 
Finally, we note environments where we are beginning to use this approach, including to provide feedback within standards development activity.","PeriodicalId":112045,"journal":{"name":"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hazard Analysis of Verification Supporting Arguments for Assured Autonomy\",\"authors\":\"K. Wasson, A. Hocking, Jonathan C. Rowanhill\",\"doi\":\"10.1109/DASC50938.2020.9256762\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The kinds of systems we are building, and the ways we are building them, are evolving. This evolution is invalidating analyses and assumptions upon which we have relied as bases for design assurance, imposing a need for new criteria and means of compliance for many autonomy-enabling technologies. While significant investigation activity into assurance bases for these technologies is underway across research, development, and standards bodies, the community will need to make sense of results coming out. We require evaluation frameworks and decision support to establish trust in, and guide selection of, new verification concepts and methods. In this work, we propose a lens for the evaluation of verification methods in development to ground new criteria, standards, and means of compliance for assuring and approving adaptive and intelligent systems. We root the evaluation framework in examining verification as a system in its own right, with a job to do and ways it can fail to do it. We then outline a structured argument in which it can be concluded a verification method is fit for purpose if it meets its requirements and the hazards of its use are adequately mitigated. 
To identify these hazards, we illustrate how industry-standard hazard analysis can be performed on verification itself, and how the results of such an analysis can be integrated into structured arguments supporting stakeholder communication and decision making. Finally, we note environments where we are beginning to use this approach, including to provide feedback within standards development activity.\",\"PeriodicalId\":112045,\"journal\":{\"name\":\"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)\",\"volume\":\"117 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DASC50938.2020.9256762\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASC50938.2020.9256762","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hazard Analysis of Verification Supporting Arguments for Assured Autonomy
The kinds of systems we are building, and the ways we are building them, are evolving. This evolution is invalidating the analyses and assumptions on which we have relied as bases for design assurance, creating a need for new criteria and means of compliance for many autonomy-enabling technologies. While research, development, and standards bodies are actively investigating assurance bases for these technologies, the community will need to make sense of the results as they emerge. We require evaluation frameworks and decision support to establish trust in, and guide the selection of, new verification concepts and methods. In this work, we propose a lens for evaluating verification methods under development, to ground new criteria, standards, and means of compliance for assuring and approving adaptive and intelligent systems. We root the evaluation framework in examining verification as a system in its own right, with a job to do and ways it can fail to do it. We then outline a structured argument in which a verification method can be concluded to be fit for purpose if it meets its requirements and the hazards of its use are adequately mitigated. To identify these hazards, we illustrate how industry-standard hazard analysis can be performed on verification itself, and how the results of such an analysis can be integrated into structured arguments that support stakeholder communication and decision making. Finally, we note environments in which we are beginning to use this approach, including providing feedback within standards development activities.
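The top-level claim of the structured argument (a verification method is fit for purpose if it meets its requirements and the hazards of its use are adequately mitigated) can be illustrated with a minimal sketch. All class names, fields, and the example hazards below are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field


@dataclass
class Hazard:
    """A way the verification method can fail to do its job."""
    description: str
    mitigated: bool  # whether the mitigation is judged adequate


@dataclass
class VerificationMethod:
    """A verification method evaluated as a system in its own right."""
    name: str
    requirements_met: bool  # does the method meet its own requirements?
    hazards: list = field(default_factory=list)

    def fit_for_purpose(self) -> bool:
        # Top-level claim: fit for purpose only if the method meets its
        # requirements AND every identified hazard of its use is mitigated.
        return self.requirements_met and all(h.mitigated for h in self.hazards)


# Hypothetical evaluation: one unmitigated hazard defeats the claim.
method = VerificationMethod(
    name="runtime-monitor-based verification",
    requirements_met=True,
    hazards=[
        Hazard("monitor misses an off-nominal behaviour", mitigated=True),
        Hazard("oracle diverges from the actual safety requirement", mitigated=False),
    ],
)
print(method.fit_for_purpose())  # False: the second hazard is unmitigated
```

This is only a toy model of the argument's conclusion logic; the paper's actual framework integrates the hazard-analysis results into structured assurance arguments rather than a boolean check.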