Increasing robustness of multimodal interaction via individual interaction histories

Felix Schüssel, F. Honold, N. Bubalo, M. Weber
{"title":"通过个体交互历史增加多模态交互的鲁棒性","authors":"Felix Schüssel, F. Honold, N. Bubalo, M. Weber","doi":"10.1145/3011263.3011273","DOIUrl":null,"url":null,"abstract":"Multimodal input fusion can be considered a well researched topic and yet it is rarely found in real world applications. One reason for this could be the lack of robustness in real world situations, especially regarding unimodal recognition technologies like speech and gesture, that tend to produce erroneous inputs that can not be detected by the subsequent multimodal input fusion mechanism. Previous work implying the possibility to detect and overcome such errors through knowledge of individual temporal behaviors has neither provided a real-time implementation nor evaluated the real benefit of such an approach. We present such an implementation of applying individual interaction histories in order to increase the robustness of multimodal inputs within a smartwatch scenario. We show how such knowledge can be created and maintained at runtime, present evaluation data from an experiment conducted in a realistic scenario, and compare the approach to the state of the art known from literature. Our approach is ready to use in other applications and existing systems, with the prospect to increase the overall robustness of future multimodal systems.","PeriodicalId":272696,"journal":{"name":"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Increasing robustness of multimodal interaction via individual interaction histories\",\"authors\":\"Felix Schüssel, F. Honold, N. Bubalo, M. Weber\",\"doi\":\"10.1145/3011263.3011273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multimodal input fusion can be considered a well researched topic and yet it is rarely found in real world applications. One reason for this could be the lack of robustness in real world situations, especially regarding unimodal recognition technologies like speech and gesture, that tend to produce erroneous inputs that can not be detected by the subsequent multimodal input fusion mechanism. Previous work implying the possibility to detect and overcome such errors through knowledge of individual temporal behaviors has neither provided a real-time implementation nor evaluated the real benefit of such an approach. We present such an implementation of applying individual interaction histories in order to increase the robustness of multimodal inputs within a smartwatch scenario. We show how such knowledge can be created and maintained at runtime, present evaluation data from an experiment conducted in a realistic scenario, and compare the approach to the state of the art known from literature. 
Our approach is ready to use in other applications and existing systems, with the prospect to increase the overall robustness of future multimodal systems.\",\"PeriodicalId\":272696,\"journal\":{\"name\":\"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3011263.3011273\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3011263.3011273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Multimodal input fusion can be considered a well-researched topic, and yet it is rarely found in real-world applications. One reason for this could be a lack of robustness in real-world situations, especially regarding unimodal recognition technologies like speech and gesture, which tend to produce erroneous inputs that cannot be detected by the subsequent multimodal input fusion mechanism. Previous work suggesting that such errors can be detected and overcome through knowledge of individual temporal behavior has neither provided a real-time implementation nor evaluated the real benefit of such an approach. We present such an implementation, applying individual interaction histories to increase the robustness of multimodal input within a smartwatch scenario. We show how this knowledge can be created and maintained at runtime, present evaluation data from an experiment conducted in a realistic setting, and compare the approach to the state of the art known from the literature. Our approach is ready for use in other applications and existing systems, with the prospect of increasing the overall robustness of future multimodal systems.
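The abstract gives no implementation details, but the core idea, learning each user's typical temporal behavior at runtime and using it to flag implausible unimodal inputs before fusion, can be illustrated with a minimal sketch. The following Python code is an assumption-based illustration, not the authors' implementation: the class name, the sliding-window size, and the standard-deviation threshold are all hypothetical.

```python
# Hypothetical sketch of an individual interaction history: maintain a
# per-user sliding window of temporal offsets between unimodal inputs
# (e.g. gesture-to-speech delay) and flag offsets that deviate from the
# individual's learned norm. Not the paper's actual implementation.

import statistics
from collections import deque


class InteractionHistory:
    """Sliding window of observed inter-modal time offsets for one user."""

    def __init__(self, window_size: int = 50, tolerance: float = 2.0):
        self.offsets = deque(maxlen=window_size)  # offsets in seconds
        self.tolerance = tolerance  # allowed deviation, in standard deviations

    def record(self, offset: float) -> None:
        """Update the history at runtime with a confirmed input pair."""
        self.offsets.append(offset)

    def is_plausible(self, offset: float) -> bool:
        """Accept an input pair if its timing matches the user's behavior."""
        if len(self.offsets) < 5:
            return True  # too little data yet: accept everything
        mean = statistics.mean(self.offsets)
        stdev = statistics.pstdev(self.offsets) or 1e-6
        return abs(offset - mean) <= self.tolerance * stdev


# Usage: the fusion engine consults the history before fusing two inputs.
history = InteractionHistory()
for observed in [0.30, 0.28, 0.35, 0.31, 0.29, 0.33]:
    history.record(observed)
print(history.is_plausible(0.32))  # True: typical timing for this user
print(history.is_plausible(2.50))  # False: likely an erroneous recognition
```

In a real fusion engine, a flagged input pair might be down-weighted or routed to a clarification dialog rather than discarded outright; how the paper's smartwatch scenario handles such cases is not specified in the abstract.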