Analysing a UI’s Impact on the Usability of Hands-free Interaction on Smart Glasses

Michael Prilla, Alexander Marc Mantel
{"title":"分析UI对智能眼镜免提交互可用性的影响","authors":"Michael Prilla, Alexander Marc Mantel","doi":"10.1109/ISMAR-Adjunct54149.2021.00095","DOIUrl":null,"url":null,"abstract":"As smart glasses and other head-mounted devices (HMD) are becoming more developed, the number of different use cases and settings where they are deployed have also increased. This includes scenarios where the hands of the user are not available to interact with a system running on such hardware, which precludes some interaction designs from these devices, such as free-hand gestures or the use of a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the hands of their users free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. Hence there is an increased need to explain these mechanisms to the user and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly.In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract as well as an explicit UI design for communicating their usage to users were designed and evaluated in a care setting, where hands-free interaction is necessary to interact with patients and for hygienic reasons. First results from a within-subjects analysis show that surprisingly there does not seem to be much of a difference in performance when comparing these approaches to each other as well as when comparing them to a baseline implementation which offered no additional help. User preferences between the designs diverged: participants often had one clear favourite for the head-gesture UIs while barely noticing the difference between the speech-based UIs. 
Preferences on certain designs did not seem to impact performance in objective and subjective measures such as error rates and questionnaire results. This suggests that either implementations’ support for these modalities should adapt to individual preferences or that there is a need to focus on other areas of support to increase usability.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysing a UI’s Impact on the Usability of Hands-free Interaction on Smart Glasses\",\"authors\":\"Michael Prilla, Alexander Marc Mantel\",\"doi\":\"10.1109/ISMAR-Adjunct54149.2021.00095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As smart glasses and other head-mounted devices (HMD) are becoming more developed, the number of different use cases and settings where they are deployed have also increased. This includes scenarios where the hands of the user are not available to interact with a system running on such hardware, which precludes some interaction designs from these devices, such as free-hand gestures or the use of a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the hands of their users free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. Hence there is an increased need to explain these mechanisms to the user and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly.In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. 
For each modality, an abstract as well as an explicit UI design for communicating their usage to users were designed and evaluated in a care setting, where hands-free interaction is necessary to interact with patients and for hygienic reasons. First results from a within-subjects analysis show that surprisingly there does not seem to be much of a difference in performance when comparing these approaches to each other as well as when comparing them to a baseline implementation which offered no additional help. User preferences between the designs diverged: participants often had one clear favourite for the head-gesture UIs while barely noticing the difference between the speech-based UIs. Preferences on certain designs did not seem to impact performance in objective and subjective measures such as error rates and questionnaire results. This suggests that either implementations’ support for these modalities should adapt to individual preferences or that there is a need to focus on other areas of support to increase usability.\",\"PeriodicalId\":244088,\"journal\":{\"name\":\"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00095\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International 
Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

As smart glasses and other head-mounted devices (HMDs) mature, the number of use cases and settings in which they are deployed has also increased. This includes scenarios where the user’s hands are not available to interact with a system running on such hardware, which rules out some interaction designs for these devices, such as free-hand gestures or a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the user’s hands free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. There is therefore an increased need to explain these mechanisms to users and to ensure they can be used to operate such a device, yet no prior work addresses how this should be done properly. In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract and an explicit UI design for communicating its usage to users were created and evaluated in a care setting, where hands-free interaction is necessary both for interacting with patients and for hygienic reasons. First results from a within-subjects analysis show that, surprisingly, there does not seem to be much difference in performance when comparing these approaches to each other, or when comparing them to a baseline implementation that offered no additional help. User preferences between the designs diverged: participants often had one clear favourite among the head-gesture UIs, while barely noticing the difference between the speech-based UIs. Preferences for certain designs did not seem to affect performance on objective or subjective measures such as error rates and questionnaire results. This suggests either that implementations’ support for these modalities should adapt to individual preferences, or that other areas of support need attention to increase usability.