Towards Comprehensible Explanations of Phenomena in Home Automation Systems

Matthäus Wander, V. Matkovic, Torben Weis, Michael Bischof, Lorenz Schwittmann
{"title":"Towards Comprehensible Explanations of Phenomena in Home Automation Systems","authors":"Matthäus Wander, V. Matkovic, Torben Weis, Michael Bischof, Lorenz Schwittmann","doi":"10.1109/PERCOMW.2018.8480147","DOIUrl":null,"url":null,"abstract":"The current focus in home automation is on making these systems smart and easy to install. Following advances in the area of smart assistants like Alexa and Google Home, we assume that users will not only issue commands to their smart home. They will ask their smart home for explanations why something happened. Hence, we develop and evaluate an algorithm that can explain users why a certain observable phenomenon occured. These questions can originate in the complexity of smart home systems, i.e., the system did something unexpected and the users wonders what caused it. Furthermore, users might ask the system about phenomena caused by their roommates. To evaluate our prototype, we analyze the difference between answers given by humans and those generated by our prototype. Therefore, we conducted an Amazon Mechanical Turk-based Turing Test. In four out of six scenarios our prototype passed the Turing Test. In one of them the computer answer appeared even more human than the real human one.","PeriodicalId":190096,"journal":{"name":"2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PERCOMW.2018.8480147","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The current focus in home automation is on making these systems smart and easy to install. Following advances in smart assistants such as Alexa and Google Home, we assume that users will not only issue commands to their smart home, but will also ask it to explain why something happened. Hence, we develop and evaluate an algorithm that explains to users why a certain observable phenomenon occurred. Such questions can arise from the complexity of smart home systems, i.e., the system did something unexpected and the user wonders what caused it. Furthermore, users might ask the system about phenomena caused by their roommates. To evaluate our prototype, we analyze the difference between answers given by humans and those generated by our prototype. To this end, we conducted an Amazon Mechanical Turk-based Turing Test. In four out of six scenarios our prototype passed the Turing Test; in one of them, the computer-generated answer appeared even more human than the real human one.
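To make the idea of answering "why did this happen?" concrete, the following is a minimal illustrative sketch, not the authors' actual algorithm: it assumes a hypothetical event log in which each observable phenomenon is recorded together with its triggering cause and, where applicable, the automation rule that fired, and it builds an explanation by walking that chain of causes backwards. The `Event` structure, the example log, and the `explain` function are assumptions made for illustration only.

```python
# Illustrative sketch: explain an observed phenomenon by tracing causes
# backwards through a hypothetical rule/event log of a smart home.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    effect: str            # observable phenomenon, e.g. "light_on(kitchen)"
    cause: Optional[str]   # triggering condition or actor, None if unknown
    rule: Optional[str]    # automation rule that fired, None for manual actions

# Hypothetical event log, including an action performed by a roommate.
LOG = [
    Event("motion(kitchen)", "Alice entered the kitchen", None),
    Event("light_on(kitchen)", "motion(kitchen)", "motion -> light"),
    Event("blinds_down(kitchen)", "light_on(kitchen)", "avoid glare"),
]

def explain(phenomenon: str) -> str:
    """Build an explanation by following causes backwards through the log."""
    chain, current, seen = [], phenomenon, set()
    while current not in seen:
        seen.add(current)
        event = next((e for e in LOG if e.effect == current), None)
        if event is None:
            break
        chain.append(event)
        if event.cause is None:
            break
        current = event.cause
    if not chain:
        return f"I have no record of {phenomenon}."
    # Render the chain from root cause to the observed effect.
    steps = [f"{e.cause or 'an unknown trigger'} caused {e.effect}"
             + (f" (rule: {e.rule})" if e.rule else "")
             for e in reversed(chain)]
    return "; then ".join(steps) + "."

print(explain("blinds_down(kitchen)"))
```

For the example query above, the sketch would answer along the lines of "Alice entered the kitchen caused motion(kitchen); then motion(kitchen) caused light_on(kitchen) (rule: motion -> light); then light_on(kitchen) caused blinds_down(kitchen) (rule: avoid glare)." The paper's evaluation compares such machine-generated explanations against human-written ones in a Turing-Test setup.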