On-Body IE: A Head-Mounted Multimodal Augmented Reality System for Learning and Recalling Faces

Daniel Sonntag, Takumi Toyama
DOI: 10.1109/IE.2013.47
Published in: 2013 9th International Conference on Intelligent Environments
Publication date: 2013-07-16
Citations: 9

Abstract

We present a new augmented reality (AR) system for knowledge-intensive, location-based expert work. The multimodal interaction system combines multiple on-body input and output devices: a speech-based dialogue system, a head-mounted augmented reality display (HMD), and a head-mounted eye tracker. These interaction devices were selected to augment and improve expert work in a specific medical application context that demonstrates the system's potential. In the sensitive domain of examining patients in a cancer screening program, we combine several active user input devices in the way most convenient for both the patient and the doctor. The resulting multimodal AR system is an on-body intelligent environment (IE): it has the potential to yield higher performance outcomes and provides a direct data acquisition control mechanism. It leverages the doctor's ability to recall a specific patient context through a virtual, context-based, patient-specific "external brain" that remembers patient faces and adapts the virtual augmentation to the current observation and finding context. In addition, patient data can be displayed on the HMD, triggered by voice or by object/patient recognition. The learned (patient) faces and immovable objects (e.g., a large medical device) provide the environmental cues that make the context-dependent recognition model part of the IE, helping doctors achieve specific goals in the hospital routine.
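The abstract describes a trigger mechanism in which a recognised face (or a voice command) selects the active patient context and the HMD overlay adapts accordingly. The following is a minimal sketch of that dispatch logic only; all class and method names (`OnBodyIE`, `learn_face`, `on_face_recognised`) are hypothetical illustrations, not the authors' implementation, and the actual system involves real face recognition, speech dialogue, and HMD rendering components not modelled here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class PatientRecord:
    # Hypothetical patient context: name plus prior observations/findings.
    name: str
    findings: List[str] = field(default_factory=list)


class OnBodyIE:
    """Sketch of the context-dependent trigger logic: learning a face
    associates an identity with a patient record; recognising it later
    switches the active context and yields the HMD overlay text."""

    def __init__(self) -> None:
        self.known_faces: Dict[str, PatientRecord] = {}  # face ID -> record
        self.active: Optional[PatientRecord] = None

    def learn_face(self, face_id: str, record: PatientRecord) -> None:
        # "Learning" a face: bind a recognised identity to its record.
        self.known_faces[face_id] = record

    def on_face_recognised(self, face_id: str) -> Optional[str]:
        # "Recalling" a face: a recognition event selects the patient
        # context and returns the text to display on the HMD.
        self.active = self.known_faces.get(face_id)
        if self.active is None:
            return None
        return f"Patient: {self.active.name}; findings: {self.active.findings}"


ie = OnBodyIE()
ie.learn_face("face-042", PatientRecord("J. Doe", ["screening 2013-05"]))
print(ie.on_face_recognised("face-042"))
```

A voice trigger would follow the same pattern, dispatching on a recognised command instead of a face ID; the paper's point is that both modalities resolve to the same patient-specific context.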