Automated capture and delivery of assistive task guidance with an eyewear computer: the GlaciAR system

T. Leelasawassuk, D. Damen, W. Mayol-Cuevas
{"title":"通过眼镜计算机自动捕获和提供辅助任务指导:GlaciAR系统","authors":"T. Leelasawassuk, D. Damen, W. Mayol-Cuevas","doi":"10.1145/3041164.3041185","DOIUrl":null,"url":null,"abstract":"In this paper we describe and evaluate an assistive mixed reality system that aims to augment users in tasks by combining automated and unsupervised information collection with minimally invasive video guides. The result is a fully self-contained system that we call GlaciAR (Glass-enabled Contextual Interactions for Augmented Reality). It operates by extracting contextual interactions from observing users performing actions. GlaciAR is able to i) automatically determine moments of relevance based on a head motion attention model, ii) automatically produce video guidance information, iii) trigger these guides based on an object detection method, iv) learn without supervision from observing multiple users and v) operate fully on-board a current eyewear computer (Google Glass). We describe the components of GlaciAR together with user evaluations on three tasks. We see this work as a first step toward scaling up the notoriously difficult authoring problem in guidance systems and an exploration of enhancing user natural abilities via minimally invasive visual cues.","PeriodicalId":210662,"journal":{"name":"Proceedings of the 8th Augmented Human International Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Automated capture and delivery of assistive task guidance with an eyewear computer: the GlaciAR system\",\"authors\":\"T. Leelasawassuk, D. Damen, W. Mayol-Cuevas\",\"doi\":\"10.1145/3041164.3041185\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we describe and evaluate an assistive mixed reality system that aims to augment users in tasks by combining automated and unsupervised information collection with minimally invasive video guides. The result is a fully self-contained system that we call GlaciAR (Glass-enabled Contextual Interactions for Augmented Reality). It operates by extracting contextual interactions from observing users performing actions. GlaciAR is able to i) automatically determine moments of relevance based on a head motion attention model, ii) automatically produce video guidance information, iii) trigger these guides based on an object detection method, iv) learn without supervision from observing multiple users and v) operate fully on-board a current eyewear computer (Google Glass). We describe the components of GlaciAR together with user evaluations on three tasks. 
We see this work as a first step toward scaling up the notoriously difficult authoring problem in guidance systems and an exploration of enhancing user natural abilities via minimally invasive visual cues.\",\"PeriodicalId\":210662,\"journal\":{\"name\":\"Proceedings of the 8th Augmented Human International Conference\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 8th Augmented Human International Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3041164.3041185\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 8th Augmented Human International Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3041164.3041185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16

Abstract

In this paper we describe and evaluate an assistive mixed reality system that aims to augment users in tasks by combining automated and unsupervised information collection with minimally invasive video guides. The result is a fully self-contained system that we call GlaciAR (Glass-enabled Contextual Interactions for Augmented Reality). It operates by extracting contextual interactions from observing users performing actions. GlaciAR is able to i) automatically determine moments of relevance based on a head motion attention model, ii) automatically produce video guidance information, iii) trigger these guides based on an object detection method, iv) learn without supervision from observing multiple users and v) operate fully on-board a current eyewear computer (Google Glass). We describe the components of GlaciAR together with user evaluations on three tasks. We see this work as a first step toward scaling up the notoriously difficult authoring problem in guidance systems and an exploration of enhancing user natural abilities via minimally invasive visual cues.
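The abstract only names the head motion attention model that GlaciAR uses to determine moments of relevance; the paper's actual method is not reproduced here. As a rough, hypothetical illustration of the general idea (treating sustained low head motion from the eyewear's inertial sensors as a cue that the wearer is attending to an object or action), a minimal sketch might look like the following. The function name, sample rate, window length, and threshold are all assumptions for illustration, not the authors' implementation.

import numpy as np

def detect_relevance_moments(gyro, fs=30.0, window_s=1.0, thresh=0.3):
    """Flag samples where head motion stays low for a sustained window.

    Illustrative stand-in for a head-motion attention model: it assumes
    that when the wearer's head is comparatively still, they are likely
    attending to what is in front of them.

    gyro     : (N, 3) array of angular-velocity samples in rad/s (hypothetical input)
    fs       : sample rate in Hz (assumed)
    window_s : how long the head must stay settled, in seconds (assumed)
    thresh   : angular-speed threshold in rad/s (assumed)
    """
    speed = np.linalg.norm(gyro, axis=1)            # per-sample angular speed
    win = max(1, int(window_s * fs))
    # Moving-average smoothing so brief jitter does not break a still period.
    smoothed = np.convolve(speed, np.ones(win) / win, mode="same")
    still = smoothed < thresh                       # True where the head is "settled"
    return np.flatnonzero(still)                    # indices of candidate relevance moments

# Example: a synthetic stream with a still segment in the middle.
rng = np.random.default_rng(0)
gyro = rng.normal(0.0, 1.0, size=(300, 3))             # head in motion
gyro[100:200] = rng.normal(0.0, 0.05, size=(100, 3))   # head held still
print(detect_relevance_moments(gyro)[:5])

In the real system this attention cue is combined with object-detection-based triggering, automatic video-guide generation, and fully on-board operation on Google Glass; the snippet above is only meant to make the notion of "moments of relevance" concrete.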