Assuring Learning-Enabled Increasingly Autonomous Systems
Nandith Narayan, Parth Ganeriwala, Randolph M. Jones, M. Matessa, S. Bhattacharyya, Jennifer Davis, Hemant Purohit, Simone Fulvio Rollini
2023 IEEE International Systems Conference (SysCon), 17 April 2023. DOI: 10.1109/SysCon53073.2023.10131227
Abstract
Autonomous agents are expected to intelligently handle emerging situations, with appropriate interaction with humans, while executing their operations. This is possible today through the integration of advanced technologies such as machine learning, but these complex algorithms pose a challenge to verification and thus to the eventual certification of the autonomous agent. In the approach discussed here, we illustrate how safety properties of a learning-enabled increasingly autonomous agent can be formally verified early in the design phase. We demonstrate this methodology by designing a learning-enabled increasingly autonomous agent in a cognitive architecture, Soar. The agent combines symbolic decision logic with numeric decision preferences that are tuned by reinforcement learning to produce post-learning decision knowledge. The agent is then automatically translated into nuXmv, and the safety properties are verified over the resulting model.
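The abstract names nuXmv as the verification backend but gives no model details. As a purely illustrative sketch (the variable and mode names below, hazard, CRUISE, AVOID, are invented here and not taken from the paper), the toy SMV model shows the general shape of the kind of LTL safety check the abstract describes: agent decision logic encoded as a finite-state transition system, with a temporal property verified over it.

```smv
-- Hypothetical sketch (not the paper's agent): a toy nuXmv model of an
-- autonomous agent's mode logic, with one LTL safety property.
MODULE main
VAR
  hazard : boolean;                  -- nondeterministic environment input
  mode   : {CRUISE, AVOID};          -- hypothetical agent decision modes
ASSIGN
  init(mode) := CRUISE;
  next(mode) := case
    hazard                 : AVOID;  -- always react to a present hazard
    mode = AVOID & !hazard : CRUISE; -- resume nominal operation
    TRUE                   : mode;   -- otherwise keep the current mode
  esac;
-- Safety property: whenever a hazard is present, the agent eventually
-- enters its avoidance mode.
LTLSPEC G (hazard -> F mode = AVOID)
```

Since nuXmv accepts NuSMV-style input, a model like this can be loaded interactively (read_model, go) and the property discharged with check_ltlspec; the actual translation from Soar and the properties verified in the paper are, of course, more involved.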