{"title":"Formal Assurances for Autonomous Systems Without Verifying Application Software","authors":"J. Stamenkovich, Lakshman Maalolan, C. Patterson","doi":"10.1109/REDUAS47371.2019.8999690","DOIUrl":null,"url":null,"abstract":"Our ability to ensure software correctness is especially challenged by autonomous systems. In particular, the use of artificial intelligence can cause unpredictable behavior when encountering situations that were not included in the training data. We describe an alternative to static analysis and conventional testing that monitors and enforces formally specified properties describing a system’s physical state. All external inputs and outputs are monitored by multiple parallel automata synthesized from guards specified as linear temporal logic (LTL) formulas capturing application-specific correctness, safety, and liveness properties. Unlike conventional runtime verification, adding guards does not impact application software performance since the monitor automata are implemented in configurable hardware. In order to remove all dependencies on software, input/output controllers and drivers may also be implemented in configurable hardware. A reporting or corrective action may be taken when a guard is triggered. This architecture is consistent with the guidance prescribed in ASTM F3269-17, Methods to Safely Bound Behavior of Unmanned Aircraft Systems Containing Complex Functions. The monitor and input/output subsystem’s minimal and isolated implementations are amenable to model checking since all components are independent finite state machines. Because this approach makes no assumptions about the root cause of deviation from specifications, it can detect and mitigate: malware threats; sensor and network attacks; software bugs; sensor, actuator and communication faults; and inadvertent or malicious operator errors. We demonstrate this approach with rules defining a virtual cage for a commercially available drone.","PeriodicalId":351115,"journal":{"name":"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/REDUAS47371.2019.8999690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
Our ability to ensure software correctness is especially challenged by autonomous systems. In particular, the use of artificial intelligence can cause unpredictable behavior when encountering situations that were not included in the training data. We describe an alternative to static analysis and conventional testing that monitors and enforces formally specified properties describing a system’s physical state. All external inputs and outputs are monitored by multiple parallel automata synthesized from guards specified as linear temporal logic (LTL) formulas capturing application-specific correctness, safety, and liveness properties. Unlike conventional runtime verification, adding guards does not impact application software performance since the monitor automata are implemented in configurable hardware. In order to remove all dependencies on software, input/output controllers and drivers may also be implemented in configurable hardware. A reporting or corrective action may be taken when a guard is triggered. This architecture is consistent with the guidance prescribed in ASTM F3269-17, Standard Practice for Methods to Safely Bound Flight Behavior of Unmanned Aircraft Systems Containing Complex Functions. The monitor and input/output subsystem’s minimal and isolated implementations are amenable to model checking since all components are independent finite state machines. Because this approach makes no assumptions about the root cause of deviation from specifications, it can detect and mitigate: malware threats; sensor and network attacks; software bugs; sensor, actuator and communication faults; and inadvertent or malicious operator errors. We demonstrate this approach with rules defining a virtual cage for a commercially available drone.
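The abstract does not give the guard formulas themselves. As a hypothetical illustration of the kind of guard described, a virtual-cage safety property could be stated as the LTL invariant □(x_min ≤ x ≤ x_max ∧ y_min ≤ y ≤ y_max ∧ 0 ≤ z ≤ z_max) over sampled position state, with □ read as "always". The sketch below models the evaluation logic of one such monitor automaton in Python; in the paper these monitors are synthesized as parallel state machines in configurable hardware rather than software, and every name and bound here is invented for illustration.

    # Hypothetical Python model of one monitor automaton for a virtual-cage
    # guard. The paper implements such monitors in configurable hardware;
    # this software model only illustrates the per-sample evaluation logic.
    from dataclasses import dataclass

    @dataclass
    class CageBounds:
        x_min: float
        x_max: float
        y_min: float
        y_max: float
        z_max: float

    class CageGuard:
        """Monitor for the invariant G(inside_cage); trips permanently on violation."""

        def __init__(self, bounds: CageBounds):
            self.bounds = bounds
            self.tripped = False  # a falsified safety property cannot recover

        def step(self, x: float, y: float, z: float) -> bool:
            """Consume one position sample; return True while the guard still holds."""
            b = self.bounds
            inside = (b.x_min <= x <= b.x_max and
                      b.y_min <= y <= b.y_max and
                      0.0 <= z <= b.z_max)
            if not inside:
                self.tripped = True
            return not self.tripped

    guard = CageGuard(CageBounds(-50.0, 50.0, -50.0, 50.0, 30.0))
    assert guard.step(0.0, 0.0, 10.0)       # inside the cage: guard holds
    assert not guard.step(60.0, 0.0, 10.0)  # outside: guard trips

Once tripped, the guard stays tripped, matching the semantics of a safety invariant over a trace prefix: a single out-of-bounds sample falsifies the property, at which point the reporting or corrective action the paper describes (for example, commanding a return to the cage) would be asserted by the monitor hardware.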