{"title":"Explainable Goal Recognition: A Framework Based on Weight of Evidence","authors":"Abeer Alshehri, Tim Miller, Mor Vered","doi":"10.48550/arXiv.2303.05622","DOIUrl":null,"url":null,"abstract":"We introduce and evaluate an eXplainable goal recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer `why?' and `why not?' questions. We computationally evaluate the performance of our system over eight different goal recognition domains showing it does not significantly increase the underlying recognition run time. Using a human behavioral study to obtain the ground truth from human annotators, we further show that the XGR model can successfully generate human-like explanations. We then report on a study with 40 participants who observe agents playing a Sokoban game and then receive explanations of the goal recognition output. We investigated participants’ understanding obtained by explanations through task prediction, explanation satisfaction, and trust.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2303.05622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We introduce and evaluate an eXplainable goal recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer 'why?' and 'why not?' questions. We computationally evaluate the performance of our system over eight different goal recognition domains, showing that it does not significantly increase the underlying recognition run time. Using ground truth obtained from human annotators in a behavioral study, we further show that the XGR model can successfully generate human-like explanations. We then report on a study in which 40 participants observed agents playing a Sokoban game and received explanations of the goal recognition output; we measured the understanding participants gained from the explanations through task prediction, explanation satisfaction, and trust.
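The abstract does not reproduce the paper's formal definitions, but Weight of Evidence is a standard quantity: WoE(g : o) = log P(o | g) − log P(o | ¬g), i.e., how strongly an observation o counts for or against a hypothesized goal g. Below is a minimal sketch of how such scores could underpin 'why?' and 'why not?' answers in a goal recognition setting; the goal set, likelihood values, and function names are illustrative assumptions, not the paper's implementation.

```python
import math

def weight_of_evidence(likelihoods, priors, goal, obs):
    """WoE(goal : obs) = log P(obs | goal) - log P(obs | not goal).

    `likelihoods[g][o]` is P(o | g); `priors[g]` is P(g).
    P(obs | not goal) is the prior-weighted average likelihood
    over the remaining candidate goals.
    """
    p_obs_given_goal = likelihoods[goal][obs]
    others = [g for g in priors if g != goal]
    z = sum(priors[g] for g in others)
    p_obs_given_not_goal = sum(
        (priors[g] / z) * likelihoods[g][obs] for g in others
    )
    return math.log(p_obs_given_goal) - math.log(p_obs_given_not_goal)

# Toy Sokoban-style example with hypothetical numbers:
# two candidate target boxes, one observed action.
likelihoods = {
    "box_A": {"push_left": 0.7, "push_right": 0.1},
    "box_B": {"push_left": 0.2, "push_right": 0.6},
}
priors = {"box_A": 0.5, "box_B": 0.5}

# Positive WoE: the observation is evidence *for* box_A ('why?');
# a negative value would count against it ('why not?').
print(weight_of_evidence(likelihoods, priors, "box_A", "push_left"))
```

In this toy run the observation "push_left" yields a WoE of about 1.25 for box_A, so an explanation could cite it as evidence supporting that goal; the paper's actual explanation-generation procedure over recognition output is more involved than this single-observation score.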