A generic and decentralized approach to XAI for autonomic systems: application to the smart home
Étienne Houzé
2021 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), September 2021
DOI: 10.1109/ACSOS-C52956.2021.00079
Citations: 1
Abstract
How can a smart home system generate explanations for its user about unusual or unwanted situations? Despite the rise of Explainable AI in recent years, there is still no satisfying solution to this problem. Much of the challenge lies in the fact that explanations are most needed in unusual or strange situations, precisely where standard statistical methods are least effective. When faced with such problems, humans rely on sequential reasoning, examining causally related conflicts and resolving them one after the other. The approach explored in this PhD thesis is to implement this kind of reasoning in a Cyber-Physical System such as a smart home. To that end, a generic and modular architecture is designed to account for the specificities of smart home systems (runtime adaptation, variety of components, importance and uniqueness of the context). The aim of the thesis is to build the base framework of an Explanatory Engine and to provide a proof-of-concept demonstrator.
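The sequential, conflict-driven reasoning the abstract describes can be sketched in miniature: the engine starts from an observation that contradicts what was expected, asks for a known cause of that fact, and follows the causal links one step at a time until the chain bottoms out. All names here (`Conflict`, `explain`, the toy causal knowledge base) are illustrative assumptions, not the thesis's actual architecture or API.

```python
# Hypothetical sketch of conflict-driven explanation for a smart home.
# The "causes" dict stands in for causal knowledge that, in the thesis's
# decentralized setting, would be contributed by individual components.
from dataclasses import dataclass


@dataclass(frozen=True)
class Conflict:
    """A mismatch between what the user observes and what they expected."""
    observed: str   # e.g. "heater is off"
    expected: str   # e.g. "heater should be on"


def explain(conflict: Conflict, causes: dict[str, str]) -> list[str]:
    """Resolve the conflict sequentially, following causal links one by one."""
    steps = []
    fact = conflict.observed
    # Walk the causal chain until no further cause is known.
    while fact in causes:
        cause = causes[fact]
        steps.append(f"{fact} because {cause}")
        fact = cause
    return steps


# Toy causal knowledge, as local component models might report it.
causes = {
    "heater is off": "window is open",
    "window is open": "user opened the window at 14:02",
}

explanation = explain(Conflict("heater is off", "heater should be on"), causes)
print(" -> ".join(explanation))
```

The point of the sketch is the control structure, not the knowledge representation: each loop iteration resolves exactly one causal conflict, mirroring the human-style step-by-step reasoning the thesis aims to reproduce, rather than producing the explanation from a single global statistical model.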