Augmented Reality Application for HoloLens Dedicated to the Accuracy Test: Evolution and Results

Julien Barbier, Franck Gechter, Sylvain Grosdemouge
{"title":"Augmented Reality Application for HoloLens Dedicated to the Accuracy Test: Evolution and Results","authors":"Julien Barbier, Franck Gechter, Sylvain Grosdemouge","doi":"10.54941/ahfe1002097","DOIUrl":null,"url":null,"abstract":"Augmented Reality (AR) proposes new ways to visualize and to interact with virtual objects. Depending on the target interaction modality and the application requirements, different type of devices can be chosen. If AR on smartphones can propose a Graphical User Interface without impacting the immersion, AR headset procures a more immersive experience, the interaction modality relying mainly on hand gesture control even if various types of interactions modalities have been explored in literature. One of the most widespread headsets is the Microsoft Hololens which offers a documentation about the set-up of interactions between the users and virtual entities. However, the ergonomic of the proposed hand gesture needs to be learnt and is not intuitive for most people and cannot be well fitted depending on the type of application.The goal of this paper is to test, in a medical application perspective, the ergonomic of different types of human machine interface in AR, the impact of changes made by the return of the users and the usability of the final human machine interface. An application dedicated to the accuracy test of the headset has been made. This application has been tested by different users who never had any previous experience with AR headset before. The virtual object used inside this application is a simple cube to simplify the interaction with the virtual entity as much as possible. After that, a users’ return of experience protocol has been propose. It has been used to feed proposals for changing interaction modalities in the application. This return of experience is based on the estimation of the ease to place the virtual entity relatively to elements of the real world, the estimation of the ease to orientate the entity and the estimation of quality of the visualization. At the end of the protocol, the final human machine interface is tested, and a comparison is made between the different types of interaction modalities proposed.Among the proposed solutions, the one without any graphical user interface artifacts (i.e. using only hand tracking to interact with the cube) results in bad comprehension and manipulation that can lead to prevent the use of this application. One explanation can be tied to the lack of precise hand tracking which can result in bad hand pose. The second solution, based on the addition of a 3D plane GUI, demonstrates a more precise appropriation of the AR context. However, the GUI plane must be positioned manually by the user to have better result. Besides, results shows that the cube must be rendered with boxes to delimit the edge and thus helping the user to make the cube closer to his/her perception expectations.These experiments showed that the use of world anchored graphical user interface for high accuracy application is needed to provide a better understanding for newcomers and can be considered as an intuitive way to use the application. 
If for most entertainment applications the hand interaction can be sufficient, the hand tracking is not accurate enough for the moment to allow a high precision positioning of virtual entities for medical application.","PeriodicalId":389399,"journal":{"name":"Healthcare and Medical Devices","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Healthcare and Medical Devices","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002097","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Augmented Reality (AR) offers new ways to visualize and interact with virtual objects. Depending on the target interaction modality and the application requirements, different types of devices can be chosen. While AR on smartphones can provide a graphical user interface without breaking immersion, an AR headset delivers a more immersive experience, with interaction relying mainly on hand gesture control, even though various interaction modalities have been explored in the literature. One of the most widespread headsets is the Microsoft HoloLens, which comes with documentation on setting up interactions between users and virtual entities. However, the ergonomics of the recommended hand gestures must be learned, are not intuitive for most people, and may not fit every type of application.

The goal of this paper is to test, from a medical application perspective, the ergonomics of different types of human-machine interfaces in AR, the impact of changes driven by user feedback, and the usability of the final human-machine interface. An application dedicated to testing the accuracy of the headset was developed and evaluated by users who had no previous experience with AR headsets. The virtual object used in this application is a simple cube, chosen to keep interaction with the virtual entity as simple as possible. A user feedback protocol was then proposed and used to drive changes to the interaction modalities in the application. This feedback is based on ratings of how easy it is to place the virtual entity relative to elements of the real world, how easy it is to orient the entity, and the quality of the visualization. At the end of the protocol, the final human-machine interface is tested and the proposed interaction modalities are compared.

Among the proposed solutions, the one without any graphical user interface artifacts (i.e., using only hand tracking to interact with the cube) leads to poor comprehension and manipulation, which can prevent the use of the application. One explanation is the lack of precise hand tracking, which can result in incorrect hand poses. The second solution, based on the addition of a 3D plane GUI, shows a more precise appropriation of the AR context. However, the GUI plane must be positioned manually by the user to obtain better results. In addition, the results show that the cube must be rendered with boxes that delimit its edges, helping the user bring the cube closer to his or her perceptual expectations.

These experiments show that a world-anchored graphical user interface is needed for high-accuracy applications: it provides a better understanding for newcomers and can be considered an intuitive way to use the application. While hand interaction can be sufficient for most entertainment applications, hand tracking is not yet accurate enough to allow high-precision positioning of virtual entities in medical applications.
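The paper does not include code; the following is a minimal, hypothetical Python sketch of how an accuracy trial of the kind described above could be scored, i.e. how far a user-placed cube ends up from a reference pose measured in the real world. The function names, the millimetre/degree units, and the quaternion convention (w, x, y, z) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical scoring of one placement trial: position and orientation error
# between the cube pose placed by the user and a known real-world target pose.
import numpy as np

def position_error_mm(placed_pos, target_pos):
    """Euclidean distance between placed and target cube centres, in millimetres."""
    return float(np.linalg.norm(np.asarray(placed_pos) - np.asarray(target_pos)) * 1000.0)

def orientation_error_deg(placed_quat, target_quat):
    """Smallest rotation angle (degrees) taking the placed orientation to the target."""
    q1 = np.asarray(placed_quat, dtype=float)
    q2 = np.asarray(target_quat, dtype=float)
    q1 /= np.linalg.norm(q1)
    q2 /= np.linalg.norm(q2)
    # |dot| handles the quaternion double cover (q and -q encode the same rotation).
    dot = np.clip(abs(np.dot(q1, q2)), 0.0, 1.0)
    return float(np.degrees(2.0 * np.arccos(dot)))

if __name__ == "__main__":
    # Example trial: cube placed 4 mm off target and tilted 5 degrees about the y axis.
    placed_pos, target_pos = (0.504, 1.200, 0.750), (0.500, 1.200, 0.750)
    placed_q = (np.cos(np.radians(2.5)), 0.0, np.sin(np.radians(2.5)), 0.0)
    target_q = (1.0, 0.0, 0.0, 0.0)
    print(f"position error:    {position_error_mm(placed_pos, target_pos):.1f} mm")
    print(f"orientation error: {orientation_error_deg(placed_q, target_q):.1f} deg")
```

A per-modality comparison like the one reported in the paper could then aggregate such errors across trials, alongside the ease-of-placement, ease-of-orientation, and visualization-quality ratings collected from the users.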