Workshop on Perceptive User Interfaces — Latest Publications

Perception and haptics: towards more accessible computers for motion-impaired users
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971507
F. Hwang, S. Keates, P. Langdon, J. Clarkson, P. Robinson
Abstract: For people with motion impairments, access to and independent control of a computer can be essential. Symptoms such as tremor and spasm, however, can make the typical keyboard and mouse arrangement for computer interaction difficult or even impossible to use. This paper describes three approaches to improving computer input effectiveness for people with motion impairments: (1) increasing the number of interaction channels, (2) enhancing commonly existing interaction channels, and (3) making more effective use of all the available information in an existing input channel. Experiments in multimodal input, haptic feedback, user modelling, and cursor control are discussed in the context of the three approaches. A haptically enhanced keyboard emulator with perceptive capability is proposed, combining the approaches in a way that improves computer access for motion-impaired users.
Citations: 32
Estimating focus of attention based on gaze and sound
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971505
R. Stiefelhagen, Jie Yang, A. Waibel
Abstract: Estimating a person's focus of attention is useful for various human-computer interaction applications, such as smart meeting rooms, where a user's goals and intent have to be monitored. In the work presented here, we are interested in modeling focus of attention in a meeting situation. We have developed a system capable of estimating participants' focus of attention from multiple cues. We employ an omnidirectional camera to simultaneously track participants' faces around a meeting table and use neural networks to estimate their head poses. In addition, we use microphones to detect who is speaking. The system predicts participants' focus of attention from acoustic and visual information separately, and then combines the output of the audio- and video-based focus of attention predictors. We have evaluated the system using the data from three recorded meetings. The acoustic information has provided 8% error reduction on average compared to using a single modality.
Citations: 76
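The abstract above describes combining separate audio- and video-based focus-of-attention predictors. As a minimal sketch of one plausible fusion rule — a renormalized weighted mixture of per-target probabilities — the following is illustrative only; the weighting scheme and the weight value are assumptions, not the paper's actual method:

```python
# Hedged sketch: fusing an audio-based and a video-based estimate of who a
# meeting participant is looking at. The linear mixture and w_video=0.6 are
# illustrative assumptions, not the combination rule from the paper.

def fuse_focus_estimates(p_video, p_audio, w_video=0.6):
    """Combine two probability distributions over possible focus targets.

    p_video, p_audio: dicts mapping target id -> probability.
    Returns a renormalized weighted mixture.
    """
    targets = set(p_video) | set(p_audio)
    fused = {t: w_video * p_video.get(t, 0.0)
                + (1.0 - w_video) * p_audio.get(t, 0.0)
             for t in targets}
    total = sum(fused.values())
    return {t: p / total for t, p in fused.items()}

video = {"A": 0.5, "B": 0.3, "C": 0.2}   # e.g. from head-pose estimation
audio = {"A": 0.7, "B": 0.2, "C": 0.1}   # e.g. from who-is-speaking detection
fused = fuse_focus_estimates(video, audio)
focus = max(fused, key=fused.get)        # most likely focus target
```

A confidence-dependent weight (trusting audio more when someone is clearly speaking) would be a natural refinement of this fixed-weight scheme.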
A pneumatic tactile alerting system for the driving environment
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971506
Mario J. Enriquez, O. Afonin, Brent Yager, Karon E Maclean
Abstract: Sensory overloaded environments present an opportunity for innovative design in the area of Human-Machine Interaction. In this paper we study the usefulness of a tactile display in the automobile environment. Our approach uses a simple pneumatic pump to produce pulsations of varying frequencies on the driver's hands through a car steering wheel fitted with inflatable pads. The goal of the project is to evaluate the effectiveness of such a system in alerting the driver of a possible problem, when it is used to augment the visual display presently used in automobiles. A steering wheel that provides haptic feedback using pneumatic pockets was developed to test our hypothesis. The steering wheel can pulsate at different frequencies. The system was tested in a simple multitasking paradigm on several subjects, and their reaction times to different stimuli were measured and analyzed. For these experiments, we found that using a tactile feedback device lowers reaction time significantly and that modulating the frequency of vibration provides extra information that can reduce the time necessary to identify a problem.
Citations: 84
A real-time head nod and shake detector
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971509
Ashish Kapoor, Rosalind W. Picard
Abstract: Head nods and head shakes are non-verbal gestures used often to communicate intent, emotion and to perform conversational functions. We describe a vision-based system that detects head nods and head shakes in real time and can act as a useful and basic interface to a machine. We use an infrared sensitive camera equipped with infrared LEDs to track pupils. The directions of head movements, determined using the position of pupils, are used as observations by a discrete Hidden Markov Model (HMM) based pattern analyzer to detect when a head nod/shake occurs. The system is trained and tested on natural data from ten users gathered in the presence of varied lighting and varied facial expressions. The system as described achieves a real time recognition accuracy of 78.46% on the test dataset.
Citations: 174
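The detector above feeds pupil-derived head-movement directions into discrete HMMs and decides nod versus shake by which model explains the sequence better. A minimal sketch of that classification step, using the standard forward algorithm, follows; the states, observation alphabet, and probability values are invented for illustration and are not the trained models from the paper:

```python
# Hedged sketch: classifying a sequence of head-movement observations by
# comparing its likelihood under a "nod" HMM and a "shake" HMM. All model
# parameters below are toy values, not the paper's trained HMMs.

def forward_likelihood(obs, states, start_p, trans_p, emit_p):
    """Likelihood of an observation sequence under a discrete HMM
    (forward algorithm, summing over all hidden state paths)."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

STATES = ["s0", "s1"]
# A toy "nod" model favoring vertical movement, "shake" favoring horizontal.
nod = dict(start_p={"s0": 0.5, "s1": 0.5},
           trans_p={"s0": {"s0": 0.3, "s1": 0.7}, "s1": {"s0": 0.7, "s1": 0.3}},
           emit_p={"s0": {"up": 0.45, "down": 0.45, "left": 0.05, "right": 0.05},
                   "s1": {"up": 0.45, "down": 0.45, "left": 0.05, "right": 0.05}})
shake = dict(start_p={"s0": 0.5, "s1": 0.5},
             trans_p={"s0": {"s0": 0.3, "s1": 0.7}, "s1": {"s0": 0.7, "s1": 0.3}},
             emit_p={"s0": {"up": 0.05, "down": 0.05, "left": 0.45, "right": 0.45},
                     "s1": {"up": 0.05, "down": 0.05, "left": 0.45, "right": 0.45}})

seq = ["up", "down", "up", "down"]   # movement directions from pupil tracking
label = ("nod" if forward_likelihood(seq, STATES, **nod)
                  > forward_likelihood(seq, STATES, **shake) else "shake")
```

In a real-time system the likelihoods would be computed over a sliding window, with a "neither" outcome when both models score poorly.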
The Infocockpit: providing location and place to aid human memory
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971526
Desney S. Tan, Jeanine K. Stefanucci, D. Proffitt, R. Pausch
Abstract: Our work focuses on building and evaluating computer system interfaces that make information memorable. Psychology research tells us people remember spatially distributed information based on its location relative to their body, as well as the environment in which the information was learned. We apply these principles in the implementation of a multimodal prototype system, the Infocockpit (for "Information Cockpit"). The Infocockpit not only uses multiple monitors surrounding the user to engage human memory for location, but also provides ambient visual and auditory displays to engage human memory for place. We report a user study demonstrating a 56% increase in memory for information presented with our Infocockpit system as compared to a standard desktop system.
Citations: 83
Hand tracking for human-computer interaction with Graylevel VisualGlove: turning back to the simple way
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971512
G. Iannizzotto, M. Villari, L. Vita
Abstract: Recent developments in the manufacturing and marketing of low power-consumption computers, small enough to be "worn" by users and remain almost invisible, have reintroduced the problem of overcoming the outdated paradigm of human-computer interaction based on use of a keyboard and a mouse. Approaches based on visual tracking seem to be the most promising, as they do not require any additional devices (gloves, etc.) and can be implemented with off-the-shelf devices such as webcams. Unfortunately, extremely variable lighting conditions and the high degree of computational complexity of most of the algorithms available make these techniques hard to use in systems where CPU power consumption is a major issue (e.g. wearable computers) and in situations where lighting conditions are critical (outdoors, in the dark, etc.). This paper describes the work carried out at VisiLAB at the University of Messina as part of the VisualGlove Project to develop a real-time, vision-based device able to operate as a substitute for the mouse and other similar input devices. It is able to operate in a wide range of lighting conditions, using a low-cost webcam and running on an entry-level PC. As explained in detail below, particular care has been taken to reduce computational complexity, in the attempt to reduce the amount of resources needed for the whole system to work.
Citations: 33
"Those look similar!" issues in automating gesture design advice
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971510
A. Long, J. Landay, L. Rowe
Abstract: Today, state-of-the-art user interfaces often include new interaction technologies, such as speech recognition, computer vision, or gesture recognition. Unfortunately, these technologies are difficult for most interface designers to incorporate into their interfaces, and traditional tools do not help designers with these technologies. One such technology is pen gestures, which are valuable as a powerful pen-based interaction technique, but are difficult to design well. We developed an interface design tool that uses unsolicited advice to help designers of pen-based user interfaces create pen gestures. Specifically, the tool warns designers when their gestures will be perceived to be similar and advises designers how to make their gestures less similar. We believe that the issues we encountered while designing an interface for advice and implementing this advice will reappear in design tools for other novel input technologies, such as hand and body gestures.
Citations: 29
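A similarity warning of the kind described above needs some measure of how alike two pen gestures are. As a minimal sketch, one could compare simple geometric features of the strokes and flag pairs whose feature distance falls below a threshold; the feature set and threshold here are invented for illustration, whereas the tool in the paper uses its own model of perceived similarity:

```python
import math

# Hedged sketch: flagging pen gestures that may look similar, by comparing
# crude geometric features. The features and threshold are illustrative
# assumptions, not the similarity model from the paper.

def gesture_features(points):
    """points: list of (x, y) samples along the pen stroke."""
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    width = max(p[0] for p in points) - min(p[0] for p in points)
    height = max(p[1] for p in points) - min(p[1] for p in points)
    heading = math.atan2(points[-1][1] - points[0][1],
                         points[-1][0] - points[0][0])
    return (length, width, height, heading)

def looks_similar(g1, g2, threshold=1.0):
    """True when the feature vectors of two gestures are close."""
    return math.dist(gesture_features(g1), gesture_features(g2)) < threshold

line = [(0, 0), (5, 0), (10, 0)]
near_line = [(0, 0), (5, 0.2), (10, 0.3)]           # barely distinguishable
zigzag = [(0, 0), (5, 5), (10, 0), (5, -5), (0, 0)]  # clearly different
```

A design tool would run such a check over every pair in the gesture set and surface the offending pairs to the designer as unsolicited advice.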
Visually prototyping perceptual user interfaces through multimodal storyboarding
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971501
A. K. Sinha, J. Landay
Abstract: We are applying our knowledge in designing informal prototyping tools for user interface design to create an interactive visual prototyping tool for perceptual user interfaces. Our tool allows a designer to quickly map out certain types of multimodal, cross-device user interface scenarios. These sketched designs form a multimodal storyboard that can then be executed, quickly testing the interaction and collecting feedback about refinements necessary for the design. By relying on visual prototyping, our multimodal storyboarding tool simplifies and speeds perceptual user interface prototyping and opens up the challenging space of perceptual user interface design to non-programmers.
Citations: 16
Design issues for vision-based computer interaction systems
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971511
R. Kjeldsen, J. Hartman
Abstract: Computer Vision and other direct sensing technologies have progressed to the point where we can detect many aspects of a user's activity reliably and in real time. Simply recognizing the activity is not enough, however. If perceptual interaction is going to become a part of the user interface, we must turn our attention to the tasks we wish to perform and methods to effectively perform them. This paper attempts to further our understanding of vision-based interaction by looking at the steps involved in building practical systems, giving examples from several existing systems. We classify the types of tasks well suited to this type of interaction as pointing, control or selection, and discuss interaction techniques for each class. We address the factors affecting the selection of the control action, and various types of control signals that can be extracted from visual input. We present our design for widgets to perform different types of tasks, and techniques, similar to those used with established user interface devices, to give the user the type of control they need to perform the task well. We look at ways to combine individual widgets into Visual Interfaces that allow the user to perform these tasks both concurrently and sequentially.
Citations: 51
Privacy protection by concealing persons in circumstantial video image
Workshop on Perceptive User Interfaces. Pub Date: 2001-11-15. DOI: 10.1145/971478.971519
S. Tansuriyavong, S. Hanaki
Abstract: A circumstantial video image should convey sufficient situation information while protecting the privacy of specific persons in the scene. This paper proposes a system that automatically identifies a person by face recognition, tracks him or her, and displays the image of the person in modified form, such as a silhouette with or without a name, or only a name in characters (i.e., an invisible person). A subjective evaluation experiment was carried out to determine which modified video image people prefer, from both the observer and the subject viewpoint. The silhouette display with a name list appeared most appropriate, balancing privacy protection against conveying situation information in the circumstantial video image.
Citations: 61
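The silhouette display described above requires separating the person from the scene and painting the person region a uniform color. A minimal sketch of that masking step via background subtraction follows; the threshold and the grayscale nested-list representation are assumptions, and the face recognition and tracking the system also performs are omitted:

```python
# Hedged sketch: concealing a person as a flat silhouette by background
# subtraction. The threshold value and image representation are illustrative
# assumptions; the actual system also recognizes and tracks the person.

def silhouette(frame, background, threshold=30):
    """frame, background: 2-D lists of grayscale pixel values (0-255),
    same dimensions. Returns the background image with pixels that differ
    from it (the moving person) painted black."""
    out = [row[:] for row in background]
    for y, (frow, brow) in enumerate(zip(frame, background)):
        for x, (f, b) in enumerate(zip(frow, brow)):
            if abs(f - b) > threshold:   # pixel changed -> foreground person
                out[y][x] = 0            # flat black conceals identity
    return out

bg = [[200] * 4 for _ in range(4)]                  # empty reference scene
frame = [row[:] for row in bg]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 90   # "person" region
masked = silhouette(frame, bg)
```

Painting onto the background rather than the live frame, as done here, also hides clothing and belongings, which matters when silhouette shape alone should convey the situation.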