Latest papers from the Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST 2005)

DT controls: adding identity to physical interfaces
P. Dietz, B. Harsham, C. Forlines, D. Leigh, W. Yerazunis, S. Shipman, B. Schmidt-Nielsen, Kathy Ryall
DOI: 10.1145/1095034.1095075
Published: 2005-10-23
Abstract: In this paper, we show how traditional physical interface components such as switches, levers, knobs and touch screens can be easily modified to identify who is activating each control. This allows us to change the function performed by the control, and the sensory feedback provided by the control itself, dependent upon the user. An auditing function is also available that logs each user's actions. We describe a number of example usage scenarios for our technique, and present two sample implementations.
Citations: 11
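The abstract describes a control that dispatches a per-user function and logs every activation for auditing. A minimal sketch of that architecture, assuming a hypothetical `IdentifiedControl` API of our own design (the paper's hardware identifies the user; here the user id is passed in directly):

```python
from datetime import datetime, timezone

class IdentifiedControl:
    """Sketch of a physical control that dispatches on the identity of the
    user activating it, with an audit log. Names and API are hypothetical."""

    def __init__(self, name):
        self.name = name
        self.handlers = {}   # user_id -> callback
        self.audit_log = []  # (timestamp, user_id, control name)

    def bind(self, user_id, handler):
        """Give this control a different function per user."""
        self.handlers[user_id] = handler

    def activate(self, user_id):
        """Log the activation, then run the user-specific function, if any."""
        self.audit_log.append((datetime.now(timezone.utc), user_id, self.name))
        handler = self.handlers.get(user_id)
        return handler() if handler is not None else None

volume = IdentifiedControl("volume-knob")
volume.bind("driver", lambda: "adjust cabin volume")
volume.bind("passenger", lambda: "adjust rear volume")
print(volume.activate("driver"))  # same knob, user-dependent function
```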
ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis
John D. Smith, Roel Vertegaal, Changuk Sohn
DOI: 10.1145/1095034.1095043
Published: 2005-10-23
Abstract: We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of application scenarios as well as an analysis of design principles. We conclude that eye contact sensing input is best utilized to provide context to action.
Citations: 56
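The detection criterion in this abstract reduces to a geometric test: the user is judged to be looking at a tag when the tag's corneal reflection lies close enough to the pupil center. A sketch of that test, with the 0.5-radius threshold being our assumption rather than a value from the paper:

```python
import math

def is_fixating(tag_reflection, pupil_center, pupil_radius, threshold=0.5):
    """Return True when the corneal reflection of an IR tag appears
    sufficiently central to the pupil. Coordinates are camera pixels;
    the threshold (fraction of pupil radius) is an assumed parameter."""
    dx = tag_reflection[0] - pupil_center[0]
    dy = tag_reflection[1] - pupil_center[1]
    return math.hypot(dx, dy) <= threshold * pupil_radius

print(is_fixating((101, 99), (100, 100), pupil_radius=12))   # near-central: looking
print(is_fixating((130, 100), (100, 100), pupil_radius=12))  # peripheral: not looking
```

Because the test is relative to the detected pupil, no per-user calibration step is required, which matches the paper's calibration-free framing.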
Low-cost multi-touch sensing through frustrated total internal reflection
Jefferson Y. Han
DOI: 10.1145/1095034.1095054
Published: 2005-10-23
Abstract: This paper describes a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces based on frustrated total internal reflection. We review previous applications of this phenomenon to sensing, provide implementation details, discuss results from our initial prototype, and outline future directions.
Citations: 1160
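In an FTIR setup, fingertips frustrate the total internal reflection inside an acrylic sheet and show up as bright spots in an IR camera image, so the software side reduces to thresholding plus connected-component labeling. A dependency-free sketch of that camera-side step, using a 2D list of intensities as a stand-in for a camera frame:

```python
def find_touches(frame, threshold=128):
    """Threshold an IR frame and return the centroid of each bright blob,
    i.e. each touch point. `frame` is a 2D list of pixel intensities."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # flood-fill one connected bright blob
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append((cy, cx))
    return touches
```

A real pipeline would run this per frame on the camera image (typically after background subtraction) and map blob centroids through the projector calibration; this sketch shows only the blob-to-touch step.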
eyeLook: using attention to facilitate mobile media consumption
C. Dickie, Roel Vertegaal, Changuk Sohn, D. Cheng
DOI: 10.1145/1095034.1095050
Published: 2005-10-23
Abstract: One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
Citations: 48
PapierCraft: a command system for interactive paper
Chunyuan Liao, François Guimbretière, K. Hinckley
DOI: 10.1145/1095034.1095074
Published: 2005-10-23
Abstract: Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.
Citations: 113
Supporting interaction in augmented reality in the presence of uncertain spatial knowledge
E. M. Coelho, B. MacIntyre, S. Julier
DOI: 10.1145/1095034.1095052
Published: 2005-10-23
Abstract: A significant problem encountered when building Augmented Reality (AR) systems is that all spatial knowledge about the world has uncertainty associated with it. This uncertainty manifests itself as registration errors between the graphics and the physical world, and ambiguity in user interaction. In this paper, we show how estimates of the registration error can be leveraged to support predictable selection in the presence of uncertain 3D knowledge. These ideas are demonstrated in osgAR, an extension to OpenSceneGraph with explicit support for uncertainty in the 3D transformations. The osgAR runtime propagates this uncertainty throughout the scene graph to compute robust estimates of the probable location of all entities in the system from the user's viewpoint, in real-time. We discuss the implementation of selection in osgAR, and the issues that must be addressed when creating interaction techniques in such a system.
Citations: 8
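The core operation behind propagating uncertainty through a scene graph is the standard linear rule: if a node applies an affine transform x' = A x + b to a position whose covariance is S, the transformed covariance is A S Aᵀ, applied node by node down the graph. osgAR itself is C++ on OpenSceneGraph; the following standalone 2D version is our illustration of that rule, not the library's API:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def propagate_covariance(A, S):
    """Covariance of A @ x when cov(x) = S: returns A S A^T.
    (The translation part b of an affine transform does not affect S.)"""
    return mat_mul(mat_mul(A, S), transpose(A))

# Doubling the x scale doubles the x standard deviation (variance x4),
# so a screen-space selection region must grow accordingly.
A = [[2.0, 0.0], [0.0, 1.0]]
S = [[0.25, 0.0], [0.0, 0.25]]
print(propagate_covariance(A, S))  # [[1.0, 0.0], [0.0, 0.25]]
```

Chaining this rule through every transform from object space to the user's viewpoint is what yields a per-entity "probable location" region, which selection code can then treat as the pickable area.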
Automation and customization of rendered web pages
Michael Bolin, Matt Webber, P. Rha, Tom Wilson, Rob Miller
DOI: 10.1145/1095034.1095062
Published: 2005-10-23
Abstract: On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.
Citations: 260
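The keyword-matching idea can be illustrated with a toy scorer: rank page components by how many of the user's keywords appear in their visible labels, preferring concise labels when several match. This scoring is our simplification for illustration, not the paper's actual heuristic algorithm:

```python
def best_match(keyword_phrase, components):
    """Return the component whose label best matches the user's keywords,
    or None if nothing matches. `components` are dicts with a visible
    "label" and an "id"; the scoring heuristic here is our own sketch."""
    keywords = keyword_phrase.lower().split()

    def score(label):
        words = label.lower().split()
        hits = sum(1 for k in keywords if k in words)
        # tie-break: prefer labels whose length matches the query's
        return (hits, -abs(len(words) - len(keywords)))

    best = max(components, key=lambda c: score(c["label"]))
    return best if score(best["label"])[0] > 0 else None

components = [
    {"id": "q",  "label": "Search query"},
    {"id": "go", "label": "Search"},
    {"id": "lg", "label": "Sign in"},
]
print(best_match("search", components)["id"])  # picks the concise "Search"
```

A script can then drive the matched component ("click the thing labeled search") without ever referencing the page's HTML ids or source, which is the end-user-facing point of the technique.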
Physical embodiments for mobile communication agents
Stefan Marti, C. Schmandt
DOI: 10.1145/1095034.1095073
Published: 2005-10-23
Abstract: This paper describes a physically embodied and animated user interface to an interactive call handling agent, consisting of a small wireless animatronic device in the form of a squirrel, bunny, or parrot. A software tool creates movement primitives, composes these primitives into complex behaviors, and triggers these behaviors dynamically at state changes in the conversational agent's finite state machine. Gaze and gestural cues from the animatronics alert both the user and co-located third parties of incoming phone calls, and data suggests that such alerting is less intrusive than conventional telephones.
Citations: 48
Supporting interspecies social awareness: using peripheral displays for distributed pack awareness
Demi Mankoff, A. Dey, Jennifer Mankoff, K. Mankoff
DOI: 10.1145/1095034.1095076
Published: 2005-10-23
Abstract: In interspecies households, it is common for the non homo sapien members to be isolated and ignored for many hours each day when humans are out of the house or working. For pack animals, such as canines, information about a pack member's extended pack interactions (outside of the nuclear household) could help to mitigate this social isolation. We have developed a Pack Activity Watch System: Allowing Broad Interspecies Love In Telecommunication with Internet-Enabled Sociability (PAWSABILITIES) for helping to support remote awareness of social activities. Our work focuses on canine companions, and includes, pawticipatory design, labradory tests, and canid camera monitoring.
Citations: 46
Moveable interactive projected displays using projector based tracking
J. C. Lee, S. Hudson, J. Summet, P. Dietz
DOI: 10.1145/1095034.1095045
Published: 2005-10-23
Abstract: Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector-based location tracking technique. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.
Citations: 113
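The discovery step in this abstract works because each projector pixel flashes a unique Gray-code sequence over successive frames: a photosensor that records one bit per frame can decode its own (x, y) in projector coordinates. A sketch of just the decoding step, with frame synchronization and the paper's low-perceptibility embedding omitted:

```python
def gray_to_binary(g):
    """Decode a Gray-coded integer back to its ordinary binary value."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_position(bits_x, bits_y):
    """Given the bit sequences a sensor observed during the horizontal and
    vertical Gray-code passes (most significant bit first), return its
    (x, y) in projector pixel coordinates."""
    gx = int("".join(map(str, bits_x)), 2)
    gy = int("".join(map(str, bits_y)), 2)
    return gray_to_binary(gx), gray_to_binary(gy)

# A sensor that saw 1,1,0 horizontally and 0,1,1 vertically sits at
# Gray codes 110 and 011, i.e. column 4, row 2 of an 8x8 grid.
print(decode_position([1, 1, 0], [0, 1, 1]))  # (4, 2)
```

Gray codes are the natural choice here because adjacent pixel columns differ in exactly one bit, so a sensor sitting on a pattern boundary misreads its position by at most one pixel rather than by half the image width.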