Title: Safe Explainable Agents for Autonomous Navigation using Evolving Behavior Trees
Authors: Nicholas Potteiger, X. Koutsoukos
Venue: 2023 IEEE International Conference on Assured Autonomy (ICAA)
Publication date: 2023-06-01
DOI: 10.1109/ICAA58325.2023.00014 (https://doi.org/10.1109/ICAA58325.2023.00014)
Citations: 1
Abstract
Machine learning and reinforcement learning are increasingly used to solve complex tasks in autonomous systems. However, autonomous agents represented by large neural networks are not transparent, making their assurability and trustworthiness critical challenges. The lack of interpretability of large models also poses severe obstacles to trust in autonomous agents and to human-machine teaming. In this paper, we leverage the hierarchical structure of behavior trees and hierarchical reinforcement learning to develop a neurosymbolic model architecture for autonomous agents. The proposed model, referred to as Evolving Behavior Trees (EBTs), integrates the components required to represent the learning tasks as well as the switching between tasks needed to achieve complex long-term goals. We design an agent for autonomous navigation and evaluate the approach against a state-of-the-art hierarchical reinforcement learning method using a Maze Simulation Environment. The results show that autonomous agents represented by EBTs can be trained efficiently. The approach incorporates explicit safety constraints into the model and incurs significantly fewer safety violations during training and execution. Further, the model provides explanations for the agent's behavior by associating the state of the executing EBT with agent actions.
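To make the idea of a neurosymbolic behavior tree concrete, the following is a minimal illustrative sketch, not the authors' implementation: a tiny behavior tree in which a symbolic safety condition gates a leaf task whose action comes from a learned policy. All names here (`Condition`, `LearnedTask`, the obstacle-clearance predicate) are hypothetical, and the learned policy is stubbed with a lambda; the point is only how explicit safety checks and learned components compose in one tree, and how the state of the executing tree can explain the chosen action.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Condition:
    """Symbolic check, e.g. an explicit safety constraint on the state."""
    def __init__(self, predicate, name):
        self.predicate, self.name = predicate, name
    def tick(self, state):
        return Status.SUCCESS if self.predicate(state) else Status.FAILURE

class LearnedTask:
    """Leaf whose action is produced by a learned (e.g. RL) policy."""
    def __init__(self, policy, name):
        self.policy, self.name = policy, name
    def tick(self, state):
        state["action"] = self.policy(state)  # delegate to the policy
        return Status.RUNNING

class Sequence:
    """Ticks children in order; stops at the first non-SUCCESS child."""
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

# A safe-navigation subtree: the learned policy only acts when the
# (hypothetical) obstacle-clearance constraint holds, so unsafe actions
# are blocked before the policy is ever consulted.
safe_nav = Sequence([
    Condition(lambda s: s["min_obstacle_dist"] > 0.5, "clear_of_obstacles"),
    LearnedTask(lambda s: "move_forward", "navigate_policy"),
])
```

Because each node is named, the sequence of active nodes at any tick ("clear_of_obstacles held, so navigate_policy acted") doubles as a symbolic trace of why the agent took its action, which is the flavor of explanation the abstract describes.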