Haptic Gaze-Tracking Based Perception of Graphical User Interfaces

S. Meers, K. Ward
DOI: 10.1109/IV.2007.59
Published in: 2007 11th International Conference Information Visualization (IV '07), 2007-07-04
Citations: 4

Abstract

This paper presents a novel human-computer interface that enables the computer display to be perceived without any use of the eyes. Our system works by tracking the user's head position and orientation to obtain their 'gaze' point on a virtual screen, and by indicating to the user what object is present at the gaze location via haptic feedback to the fingers and synthetic speech or Braille text. This is achieved by using the haptic vibration frequency delivered to the fingers to indicate the type of screen object at the gaze position, and the vibration amplitude to indicate the screen object's window-layer, when the object is contained in overlapping windows. Also, objects that are gazed at momentarily have their name output to the user via a Braille display or synthetic speech. Our experiments have shown that by browsing over the screen and receiving haptic and voice (or Braille) feedback in this manner, the user is able to acquire a mental two-dimensional representation of the virtual screen and its content without any use of the eyes. This form of blind screen perception can then be used to locate screen objects and controls and manipulate them with the mouse or via gaze control. Our experimental results are provided in which we demonstrate how this form of blind screen perception can effectively be used to exercise point-and-click and drag-and-drop control of desktop objects and open windows by using the mouse, or the user's head pose, without any use of the eyes.