Formal Assurances for Autonomous Systems Without Verifying Application Software

J. Stamenkovich, Lakshman Maalolan, C. Patterson
{"title":"无需验证应用软件的自治系统的正式保证","authors":"J. Stamenkovich, Lakshman Maalolan, C. Patterson","doi":"10.1109/REDUAS47371.2019.8999690","DOIUrl":null,"url":null,"abstract":"Our ability to ensure software correctness is especially challenged by autonomous systems. In particular, the use of artificial intelligence can cause unpredictable behavior when encountering situations that were not included in the training data. We describe an alternative to static analysis and conventional testing that monitors and enforces formally specified properties describing a system’s physical state. All external inputs and outputs are monitored by multiple parallel automata synthesized from guards specified as linear temporal logic (LTL) formulas capturing application-specific correctness, safety, and liveness properties. Unlike conventional runtime verification, adding guards does not impact application software performance since the monitor automata are implemented in configurable hardware. In order to remove all dependencies on software, input/output controllers and drivers may also be implemented in configurable hardware. A reporting or corrective action may be taken when a guard is triggered. This architecture is consistent with the guidance prescribed in ASTM F3269-17, Methods to Safely Bound Behavior of Unmanned Aircraft Systems Containing Complex Functions. The monitor and input/output subsystem’s minimal and isolated implementations are amenable to model checking since all components are independent finite state machines. Because this approach makes no assumptions about the root cause of deviation from specifications, it can detect and mitigate: malware threats; sensor and network attacks; software bugs; sensor, actuator and communication faults; and inadvertent or malicious operator errors. 
We demonstrate this approach with rules defining a virtual cage for a commercially available drone.","PeriodicalId":351115,"journal":{"name":"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Formal Assurances for Autonomous Systems Without Verifying Application Software\",\"authors\":\"J. Stamenkovich, Lakshman Maalolan, C. Patterson\",\"doi\":\"10.1109/REDUAS47371.2019.8999690\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Our ability to ensure software correctness is especially challenged by autonomous systems. In particular, the use of artificial intelligence can cause unpredictable behavior when encountering situations that were not included in the training data. We describe an alternative to static analysis and conventional testing that monitors and enforces formally specified properties describing a system’s physical state. All external inputs and outputs are monitored by multiple parallel automata synthesized from guards specified as linear temporal logic (LTL) formulas capturing application-specific correctness, safety, and liveness properties. Unlike conventional runtime verification, adding guards does not impact application software performance since the monitor automata are implemented in configurable hardware. In order to remove all dependencies on software, input/output controllers and drivers may also be implemented in configurable hardware. A reporting or corrective action may be taken when a guard is triggered. This architecture is consistent with the guidance prescribed in ASTM F3269-17, Methods to Safely Bound Behavior of Unmanned Aircraft Systems Containing Complex Functions. 
The monitor and input/output subsystem’s minimal and isolated implementations are amenable to model checking since all components are independent finite state machines. Because this approach makes no assumptions about the root cause of deviation from specifications, it can detect and mitigate: malware threats; sensor and network attacks; software bugs; sensor, actuator and communication faults; and inadvertent or malicious operator errors. We demonstrate this approach with rules defining a virtual cage for a commercially available drone.\",\"PeriodicalId\":351115,\"journal\":{\"name\":\"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/REDUAS47371.2019.8999690\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/REDUAS47371.2019.8999690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Our ability to ensure software correctness is especially challenged by autonomous systems. In particular, the use of artificial intelligence can cause unpredictable behavior when encountering situations that were not included in the training data. We describe an alternative to static analysis and conventional testing that monitors and enforces formally specified properties describing a system’s physical state. All external inputs and outputs are monitored by multiple parallel automata synthesized from guards specified as linear temporal logic (LTL) formulas capturing application-specific correctness, safety, and liveness properties. Unlike conventional runtime verification, adding guards does not impact application software performance since the monitor automata are implemented in configurable hardware. In order to remove all dependencies on software, input/output controllers and drivers may also be implemented in configurable hardware. A reporting or corrective action may be taken when a guard is triggered. This architecture is consistent with the guidance prescribed in ASTM F3269-17, Methods to Safely Bound Behavior of Unmanned Aircraft Systems Containing Complex Functions. The monitor and input/output subsystem’s minimal and isolated implementations are amenable to model checking since all components are independent finite state machines. Because this approach makes no assumptions about the root cause of deviation from specifications, it can detect and mitigate: malware threats; sensor and network attacks; software bugs; sensor, actuator and communication faults; and inadvertent or malicious operator errors. We demonstrate this approach with rules defining a virtual cage for a commercially available drone.
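As a rough illustration of the monitoring idea described above — not the paper's hardware implementation, which synthesizes the automata in configurable logic — a "virtual cage" guard such as the LTL safety property G(inside_cage) can be checked by a two-state automaton over the telemetry stream. The cage bounds, telemetry fields, and class names in this sketch are hypothetical:

```python
# Illustrative software sketch of a guard monitor for a "virtual cage".
# The paper implements such monitors as parallel automata in configurable
# hardware; this Python version only models the monitoring semantics.
# All names, bounds, and the telemetry format are hypothetical.

from dataclasses import dataclass
from enum import Enum

class GuardState(Enum):
    SAFE = "safe"
    VIOLATED = "violated"   # latched once G(inside_cage) is falsified

@dataclass(frozen=True)
class Telemetry:
    x: float      # metres east of the cage origin
    y: float      # metres north of the cage origin
    alt: float    # metres above ground

class VirtualCageMonitor:
    """Monitor automaton for the LTL safety guard G(inside_cage).

    A safety property is falsified by a finite prefix of the trace, so
    the automaton needs only two states: SAFE, and a latched VIOLATED
    state entered on the first sample outside the cage.
    """

    def __init__(self, half_side: float, max_alt: float):
        self.half_side = half_side
        self.max_alt = max_alt
        self.state = GuardState.SAFE

    def inside_cage(self, t: Telemetry) -> bool:
        return (abs(t.x) <= self.half_side and
                abs(t.y) <= self.half_side and
                0.0 <= t.alt <= self.max_alt)

    def step(self, t: Telemetry) -> GuardState:
        # Violations latch: once G(inside_cage) fails it stays failed,
        # which is where a reporting or corrective action would hook in.
        if self.state is GuardState.SAFE and not self.inside_cage(t):
            self.state = GuardState.VIOLATED
        return self.state

monitor = VirtualCageMonitor(half_side=50.0, max_alt=30.0)
trace = [Telemetry(10, 5, 12), Telemetry(49, 0, 25), Telemetry(55, 0, 25)]
states = [monitor.step(t) for t in trace]
```

Because each guard compiles to an independent finite state machine like this one, many guards can run in parallel and each is small enough to model check — which is the property the paper exploits by moving them into hardware, off the application software's critical path.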