Towards Comprehensible Explanations of Phenomena in Home Automation Systems
Matthäus Wander, V. Matkovic, Torben Weis, Michael Bischof, Lorenz Schwittmann
2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 19 March 2018. DOI: 10.1109/PERCOMW.2018.8480147
Abstract: The current focus in home automation is on making these systems smart and easy to install. Following advances in smart assistants such as Alexa and Google Home, we assume that users will not only issue commands to their smart home; they will also ask it to explain why something happened. Hence, we develop and evaluate an algorithm that can explain to users why a certain observable phenomenon occurred. Such questions can arise from the complexity of smart home systems, i.e., the system did something unexpected and the user wonders what caused it. Furthermore, users might ask the system about phenomena caused by their roommates. To evaluate our prototype, we analyze the difference between answers given by humans and those generated by our prototype. To this end, we conducted a Turing Test on Amazon Mechanical Turk. In four out of six scenarios our prototype passed the Turing Test; in one of them, the computer-generated answer even appeared more human than the real human one.
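The abstract does not specify how the explanation algorithm works, so the following is only a minimal sketch of one plausible approach consistent with its description: tracing an observed phenomenon backward through trigger-action rules and a recorded event log to build a causal chain, then verbalizing it. All names here (Rule, RULES, EVENT_LOG, explain) are hypothetical and invented for illustration, not taken from the paper.

    # Hypothetical sketch, not the authors' algorithm: explain an observed
    # smart-home phenomenon by backward-chaining through trigger-action rules.

    from dataclasses import dataclass


    @dataclass
    class Rule:
        """When `trigger` is observed, the system produces `effect`."""
        trigger: str
        effect: str
        description: str  # human-readable phrasing used in the explanation


    # Invented example rule base and event log for illustration.
    RULES = [
        Rule("motion_hallway", "light_hallway_on", "the hallway motion sensor fired"),
        Rule("light_hallway_on", "blinds_down", "the hallway light turned on after sunset"),
    ]

    EVENT_LOG = {"motion_hallway", "light_hallway_on", "blinds_down"}


    def explain(phenomenon: str) -> str:
        """Trace `phenomenon` back through RULES to a root cause and verbalize it."""
        chain = []
        current = phenomenon
        seen = set()  # guard against cyclic rule bases
        while current not in seen:
            seen.add(current)
            cause = next(
                (r for r in RULES if r.effect == current and r.trigger in EVENT_LOG),
                None,
            )
            if cause is None:
                break
            chain.append(cause)
            current = cause.trigger
        if not chain:
            return f"I could not find a rule that explains '{phenomenon}'."
        reasons = ", because ".join(r.description for r in chain)
        return f"'{phenomenon}' happened because {reasons}."


    if __name__ == "__main__":
        print(explain("blinds_down"))
        # -> 'blinds_down' happened because the hallway light turned on after
        #    sunset, because the hallway motion sensor fired.

A real system along these lines would additionally need timestamps to disambiguate competing causes, access to device state, and more careful natural-language generation; the paper's Turing Test results suggest the authors' generated answers were considerably more human-like than this template-based sketch.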