{"title":"分析UI对智能眼镜免提交互可用性的影响","authors":"Michael Prilla, Alexander Marc Mantel","doi":"10.1109/ISMAR-Adjunct54149.2021.00095","DOIUrl":null,"url":null,"abstract":"As smart glasses and other head-mounted devices (HMD) are becoming more developed, the number of different use cases and settings where they are deployed have also increased. This includes scenarios where the hands of the user are not available to interact with a system running on such hardware, which precludes some interaction designs from these devices, such as free-hand gestures or the use of a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the hands of their users free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. Hence there is an increased need to explain these mechanisms to the user and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly.In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract as well as an explicit UI design for communicating their usage to users were designed and evaluated in a care setting, where hands-free interaction is necessary to interact with patients and for hygienic reasons. First results from a within-subjects analysis show that surprisingly there does not seem to be much of a difference in performance when comparing these approaches to each other as well as when comparing them to a baseline implementation which offered no additional help. User preferences between the designs diverged: participants often had one clear favourite for the head-gesture UIs while barely noticing the difference between the speech-based UIs. Preferences on certain designs did not seem to impact performance in objective and subjective measures such as error rates and questionnaire results. This suggests that either implementations’ support for these modalities should adapt to individual preferences or that there is a need to focus on other areas of support to increase usability.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysing a UI’s Impact on the Usability of Hands-free Interaction on Smart Glasses\",\"authors\":\"Michael Prilla, Alexander Marc Mantel\",\"doi\":\"10.1109/ISMAR-Adjunct54149.2021.00095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As smart glasses and other head-mounted devices (HMD) are becoming more developed, the number of different use cases and settings where they are deployed have also increased. This includes scenarios where the hands of the user are not available to interact with a system running on such hardware, which precludes some interaction designs from these devices, such as free-hand gestures or the use of a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the hands of their users free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. 
Hence there is an increased need to explain these mechanisms to the user and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly.In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract as well as an explicit UI design for communicating their usage to users were designed and evaluated in a care setting, where hands-free interaction is necessary to interact with patients and for hygienic reasons. First results from a within-subjects analysis show that surprisingly there does not seem to be much of a difference in performance when comparing these approaches to each other as well as when comparing them to a baseline implementation which offered no additional help. User preferences between the designs diverged: participants often had one clear favourite for the head-gesture UIs while barely noticing the difference between the speech-based UIs. Preferences on certain designs did not seem to impact performance in objective and subjective measures such as error rates and questionnaire results. This suggests that either implementations’ support for these modalities should adapt to individual preferences or that there is a need to focus on other areas of support to increase usability.\",\"PeriodicalId\":244088,\"journal\":{\"name\":\"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00095\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Analysing a UI’s Impact on the Usability of Hands-free Interaction on Smart Glasses
As smart glasses and other head-mounted devices (HMDs) mature, the number of use cases and settings in which they are deployed has also increased. This includes scenarios in which the user's hands are not available to interact with a system running on such hardware, which rules out some interaction designs for these devices, such as free-hand gestures or a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the users' hands free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. Hence, there is an increased need to explain these mechanisms to users and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly. In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract and an explicit UI design for communicating its usage to users were designed and evaluated in a care setting, where hands-free interaction is necessary both for interacting with patients and for hygienic reasons. First results from a within-subjects analysis show that, surprisingly, there seems to be little difference in performance between these approaches, or between them and a baseline implementation that offered no additional help. User preferences between the designs diverged: participants often had one clear favourite among the head-gesture UIs, while barely noticing the difference between the speech-based UIs. Preferences for certain designs did not seem to affect performance on objective or subjective measures such as error rates and questionnaire results. This suggests either that implementations' support for these modalities should adapt to individual preferences, or that support should focus on other areas to increase usability.