Multi-Dimensional Evaluation of an Augmented Reality Head-Mounted Display User Interface for Controlling Legged Manipulators

Rodrigo Chacón Quesada, Y. Demiris
ACM Transactions on Human-Robot Interaction, published 2024-04-22. DOI: 10.1145/3660649

Abstract

Controlling assistive robots can be challenging for some users, especially those lacking relevant experience. Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate this task. Although extensive research on legged manipulators exists, comparatively little of it addresses their UIs. Most existing UIs rely on traditional control interfaces such as joysticks, Hand-Held (HH) controllers, and 2D UIs. These interfaces not only risk being unintuitive, thus discouraging interaction with the robot partner, but also draw the operator's focus away from the task and towards the UI. This shift in attention raises additional safety concerns, particularly in the potentially hazardous environments where legged manipulators are frequently deployed. Moreover, traditional interfaces limit the operators' ability to use their hands for other tasks.

To overcome these limitations, in this article we present a user study comparing an AR Head-Mounted Display (HMD) UI we developed for controlling a legged manipulator against off-the-shelf control methods for such robots. The study involved 27 participants and 135 trials, from which we gathered over 405 completed questionnaires. The trials comprised multiple navigation and manipulation tasks of varying difficulty, using a Boston Dynamics (BD) Spot®, a 7-DoF Kinova® robot arm, and a Robotiq® 2F-85 gripper that we integrated into a legged manipulator. We compared the UIs across multiple dimensions relevant to successful human-robot interaction: cognitive workload, technology acceptance, fluency, system usability, immersion, and trust. Our study employed a factorial experimental design in which participants underwent five different conditions, generating longitudinal data.

Because such data may contain unknown distributions and outliers, parametric methods are questionable for its analysis, and while non-parametric alternatives exist, they may reduce statistical power. We therefore chose Bayesian data analysis as an effective alternative that addresses these limitations. Our results show that AR UIs can outpace HH-based control methods and reduce cognitive demands when designers build hands-free interactions and cognitive offloading principles into the UI. Furthermore, using the AR UI together with our cognitive offloading feature resulted in higher usability scores and significantly higher fluency and Technology Acceptance Model (TAM) scores.

Regarding immersion, responses to the Augmented Reality Immersion (ARI) questionnaire were significantly higher for the AR UI than for the HH UI, regardless of the main interaction method with the former, i.e., hand gestures or cognitive offloading. Based on the participants' qualitative answers, we attribute this to a combination of factors, the most important being the free use of the hands when wearing the HMD and the ability to see the real environment without diverting attention to the UI. Regarding trust, our findings did not reveal discernible differences in reported trust scores across UI options. However, during the manipulation phase of the study, where participants could select their preferred UI, they consistently reported higher levels of trust than in the navigation category. Moreover, the percentage of participants who selected the AR UI for this manipulation stage changed drastically after we incorporated the cognitive offloading feature.
Thus, trust appears to have mediated the use and non-use of the UIs along a dimension different from the ones considered in our study, i.e., delegation and reliance. Overall, our AR HMD UI for the control of legged manipulators was found to improve human-robot interaction across several relevant dimensions, underscoring the critical role of UI design in the effective and trustworthy utilisation of robotic systems.
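The abstract motivates Bayesian analysis by noting that questionnaire data may have unknown distributions and outliers, which undermine parametric tests, while non-parametric tests sacrifice power. The paper does not publish its analysis code, so the following is only an illustrative sketch of that general idea, not the authors' method: a Bayesian comparison of two hypothetical score groups using a heavy-tailed Student-t likelihood (robust to the outlier) with a simple grid approximation over the group means. All data values, the fixed scale, and the degrees of freedom are invented for the example.

```python
# Illustrative sketch only (not the paper's analysis): Bayesian comparison of
# two groups of questionnaire scores with a Student-t likelihood, which is
# robust to outliers where a normal-likelihood t-test is not. The posterior
# over each group mean is computed by grid approximation; the scale and the
# degrees of freedom are fixed for brevity.
import numpy as np
from scipy import stats

# Hypothetical Likert-style scores for two UI conditions (invented data).
ar_ui = np.array([6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 2.0])   # note one outlier
hh_ui = np.array([4.9, 5.1, 4.7, 5.3, 5.0, 4.8, 5.2])

def posterior_mean_grid(y, mu_grid, sigma=0.6, nu=4.0):
    """Normalised posterior over the group mean: flat prior, Student-t(nu)
    likelihood with fixed scale sigma (constant Jacobian term omitted)."""
    # log-likelihood of every observation under every candidate mean
    ll = stats.t.logpdf((y[:, None] - mu_grid[None, :]) / sigma, df=nu).sum(axis=0)
    p = np.exp(ll - ll.max())          # subtract max for numerical stability
    return p / p.sum()

mu_grid = np.linspace(1.0, 8.0, 2001)
p_ar = posterior_mean_grid(ar_ui, mu_grid)
p_hh = posterior_mean_grid(hh_ui, mu_grid)

# Posterior probability that the AR-UI mean exceeds the HH-UI mean,
# integrating over the joint (independent) posteriors.
diff_prob = np.sum(np.outer(p_ar, p_hh) * (mu_grid[:, None] > mu_grid[None, :]))
print(f"P(mean_AR > mean_HH | data) = {diff_prob:.3f}")
```

The heavy-tailed likelihood downweights the outlying score instead of letting it drag the estimated mean, and the result is a direct posterior probability of a difference rather than a p-value, which is the kind of robustness-with-power trade-off the abstract alludes to.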