Audio-visual training and feedback to learn touch-based gestures

IF 1.7 · CAS Quartile 4 (Computer Science) · JCR Q3 (Computer Science, Interdisciplinary Applications)
Sadia Rubab, Muhammad Wajeeh Uz Zaman, Umer Rashid, Lingyun Yu, Yingcai Wu
{"title":"Audio-visual training and feedback to learn touch-based gestures","authors":"Sadia Rubab, Muhammad Wajeeh Uz Zaman, Umer Rashid, Lingyun Yu, Yingcai Wu","doi":"10.1007/s12650-024-01012-x","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>To help people learn the touch-based gestures needed to perform various tasks, researchers commonly use training from an experimenter. However, it leads to dependence on a person, as well as memory problems with increasing number and complexity of gestures. Several on-demand training and feedback methods have been proposed that provide constant support and help people learn novel gestures without human assistance. Non-speech audio with the visual clue, a gesture training/feedback method, could be extended in the interactive visualization tools. However, the literature offers several options in the non-speech audio and visual clues but no comparisons. We conducted an online study to identify suitable non-speech audio representations with the visual clues of 12 touch-based gestures. For each audiovisual combination, we evaluated the thinking, time demand, frustration, understanding, and learnability of 45 participants. We found that the visual clue of a gesture, either iconic or ghost, did not affect the suitability of an audio representation. However, the preferences in audio channels and audio patterns differed for the different gestures and their directions. We implemented the training/feedback method in an Infovis tool. The evaluation showed significant use of the method by the participants to explore the tool.</p><h3 data-test=\"abstract-sub-heading\">Graphical Abstract</h3>","PeriodicalId":54756,"journal":{"name":"Journal of Visualization","volume":"21 1","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visualization","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12650-024-01012-x","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

To help people learn the touch-based gestures needed to perform various tasks, researchers commonly rely on training delivered by an experimenter. However, this creates dependence on another person, as well as memory problems as the number and complexity of gestures grow. Several on-demand training and feedback methods have been proposed that provide constant support and help people learn novel gestures without human assistance. Non-speech audio paired with a visual clue, one such gesture training/feedback method, could be extended to interactive visualization tools. However, the literature offers several options for non-speech audio and visual clues but no comparisons between them. We conducted an online study to identify suitable non-speech audio representations to accompany the visual clues of 12 touch-based gestures. For each audio-visual combination, we evaluated the thinking, time demand, frustration, understanding, and learnability reported by 45 participants. We found that the visual clue of a gesture, whether iconic or ghost, did not affect the suitability of an audio representation. However, preferences in audio channels and audio patterns differed across gestures and their directions. We implemented the training/feedback method in an Infovis tool. The evaluation showed significant use of the method by the participants to explore the tool.
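The abstract does not give implementation details, but the core idea of pairing each gesture's visual clue with a non-speech audio cue can be sketched. The snippet below is a minimal, hypothetical TypeScript illustration using the Web Audio API; the gesture names, tone frequencies, pan values, and the showGhostClue helper are illustrative assumptions, not taken from the study.

```typescript
// Hypothetical sketch: pair a touch gesture's visual clue with a
// non-speech audio cue (tone pattern + stereo channel). Values below
// are illustrative assumptions, not the paper's actual design.

type AudioCue = { frequencies: number[]; pan: number }; // pan: -1 left, +1 right

const gestureCues: Record<string, AudioCue> = {
  "swipe-left":  { frequencies: [660, 440], pan: -1 }, // falling pattern, left channel
  "swipe-right": { frequencies: [440, 660], pan:  1 }, // rising pattern, right channel
  "pinch":       { frequencies: [880, 440], pan:  0 },
};

const ctx = new AudioContext();

// Play a short tone sequence for the given gesture via the Web Audio API.
function playCue(gesture: string): void {
  const cue = gestureCues[gesture];
  if (!cue) return;
  const panner = ctx.createStereoPanner();
  panner.pan.value = cue.pan;
  panner.connect(ctx.destination);
  cue.frequencies.forEach((freq, i) => {
    const osc = ctx.createOscillator();
    osc.frequency.value = freq;
    osc.connect(panner);
    osc.start(ctx.currentTime + i * 0.15);       // 150 ms per tone
    osc.stop(ctx.currentTime + (i + 1) * 0.15);
  });
}

// Show a "ghost" visual clue (a translucent hint overlay) alongside the audio.
function showGhostClue(gesture: string, overlay: HTMLElement): void {
  overlay.textContent = `Try: ${gesture}`;
  overlay.style.opacity = "0.4";
  playCue(gesture);
}
```

A tool built this way could trigger showGhostClue on demand so a learner hears the audio pattern and sees the hint without relying on an experimenter, which is the kind of on-demand support the abstract describes.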

Graphical Abstract


Source journal
Journal of Visualization (Computer Science, Interdisciplinary Applications; Imaging Science & Photographic Technology)
CiteScore: 3.40
Self-citation rate: 5.90%
Articles per year: 79
Review time: >12 weeks
Journal description: Visualization is an interdisciplinary imaging science devoted to making the invisible visible through the techniques of experimental visualization and computer-aided visualization. The scope of the Journal is to provide a forum for exchanging information on the latest visualization technology and its applications through the presentation of recent papers by both researchers and technicians.