Proceedings of the ACM Symposium on User Interface Software and Technology: Latest Publications

DoubleFlip: a motion gesture delimiter for interaction
J. Ruiz, Yang Li
DOI: https://doi.org/10.1145/1866218.1866265 · Published: 2010-10-03 · Pages: 449-450
Abstract: In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.
Citations: 10
User guided audio selection from complex sound mixtures
P. Smaragdis
DOI: https://doi.org/10.1145/1622176.1622193 · Published: 2009-10-04 · Pages: 89-92
Abstract: In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds which is either a waveform, or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections which would have been infeasible otherwise.
Citations: 12
Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines
E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob
DOI: https://doi.org/10.1145/1622176.1622207 · Published: 2009-10-04 · Pages: 157-166
Abstract: Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.
Citations: 136
A screen-space formulation for 2D and 3D direct manipulation
J. Reisman, Philip L. Davidson, Jefferson Y. Han
DOI: https://doi.org/10.1145/1622176.1622190 · Published: 2009-10-04 · Pages: 69-78
Abstract: Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.
Citations: 207
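The two-touch 2D RST interaction this abstract calls the de facto standard has a closed-form solution: two contact-point correspondences fully determine a similarity transform. The sketch below illustrates that standard 2D case only (it is not the paper's screen-space formulation, which generalizes the idea to 3D); points are represented as Python complex numbers for brevity.

```python
import cmath

def rst_from_two_touches(p1, p2, q1, q2):
    """Recover the rotate-scale-translate (similarity) transform mapping
    touch points (p1, p2) to their moved positions (q1, q2).
    Points are complex numbers x + y*1j.
    Returns (scale, angle_in_radians, translation)."""
    z = (q2 - q1) / (p2 - p1)   # complex ratio encodes scale * e^{i*angle}
    t = q1 - z * p1             # translation left over after rotate+scale
    return abs(z), cmath.phase(z), t

def apply_rst(p, scale, angle, t):
    """Apply a recovered transform to any point of the manipulated object."""
    return scale * cmath.exp(1j * angle) * p + t
```

For example, if one finger stays at the origin while the other moves from (1, 0) to (0, 2), the solve yields scale 2, a 90-degree rotation, and zero translation.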
A practical pressure sensitive computer keyboard
P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche
DOI: https://doi.org/10.1145/1622176.1622187 · Published: 2009-10-04 · Pages: 55-58
Abstract: A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.
Citations: 70
SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices
K. Yatani, K. Truong
DOI: https://doi.org/10.1145/1622176.1622198 · Published: 2009-10-04 · Pages: 111-120
Abstract: One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.
Citations: 177
Enabling always-available input with muscle-computer interfaces
T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay
DOI: https://doi.org/10.1145/1622176.1622208 · Published: 2009-10-04 · Pages: 167-176
Abstract: Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
Citations: 325
PhotoelasticTouch: transparent rubbery tangible interface using an LCD and photoelasticity
Toshiki Sato, Haruko Mamiya, H. Koike, K. Fukuchi
DOI: https://doi.org/10.1145/1622176.1622185 · Published: 2009-10-04 · Pages: 43-50
Abstract: PhotoelasticTouch is a novel tabletop system designed to intuitively facilitate touch-based interaction via real objects made from transparent elastic material. The system utilizes vision-based recognition techniques and the photoelastic properties of the transparent rubber to recognize deformed regions of the elastic material. Our system works with elastic materials over a wide variety of shapes and does not require any explicit visual markers. Compared to traditional interactive surfaces, our 2.5 dimensional interface system enables direct touch interaction and soft tactile feedback. In this paper we present our force sensing technique using photoelasticity and describe the implementation of our prototype system. We also present three practical applications of PhotoelasticTouch, a force-sensitive touch panel, a tangible face application, and a paint application.
Citations: 62
Changing how people view changes on the web
J. Teevan, S. Dumais, Daniel J. Liebling, Richard L. Hughes
DOI: https://doi.org/10.1145/1622176.1622221 · Published: 2009-10-04 · Pages: 237-246
Abstract: The Web is a dynamic information environment. Web content changes regularly and people revisit Web pages frequently. But the tools used to access the Web, including browsers and search engines, do little to explicitly support these dynamics. In this paper we present DiffIE, a browser plug-in that makes content change explicit in a simple and lightweight manner. DiffIE caches the pages a person visits and highlights how those pages have changed when the person returns to them. We describe how we built a stable, reliable, and usable system, including how we created compact, privacy-preserving page representations to support fast difference detection. Via a longitudinal user study, we explore how DiffIE changed the way people dealt with changing content. We find that much of its benefit came not from exposing expected change, but rather from drawing attention to unexpected change and helping people build a richer understanding of the Web content they frequent.
Citations: 7
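The DiffIE abstract mentions compact, privacy-preserving page representations that support fast difference detection. A minimal sketch of that general idea, using one-way hashes per text block (an assumed structure for illustration, not the plug-in's actual cached format):

```python
import hashlib

def page_signature(paragraphs):
    """Represent a page as short one-way hashes, one per non-empty text
    block: compact, and the cached form does not expose page content."""
    return [hashlib.sha256(p.strip().encode("utf-8")).hexdigest()[:8]
            for p in paragraphs if p.strip()]

def changed_blocks(old_signature, new_paragraphs):
    """On revisit, return indices of blocks whose hash was not in the
    cached signature, i.e. candidate regions to highlight as changed."""
    old = set(old_signature)
    new_sig = page_signature(new_paragraphs)
    return [i for i, h in enumerate(new_sig) if h not in old]
```

Comparing hash sets rather than full text keeps the per-page cache small and the revisit-time check a simple membership test.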
Mining web interactions to automatically create mash-ups
Jeffrey P. Bigham, R. S. Kaminsky, Jeffrey Nichols
DOI: https://doi.org/10.1145/1622176.1622215 · Published: 2009-10-04 · Pages: 203-212
Abstract: The deep web contains an order of magnitude more information than the surface web, but that information is hidden behind the web forms of a large number of web sites. Metasearch engines can help users explore this information by aggregating results from multiple resources, but previously these could only be created and maintained by programmers. In this paper, we explore the automatic creation of metasearch mash-ups by mining the web interactions of multiple web users to find relations between query forms on different web sites. We also present an implemented system called TX2 that uses those connections to search multiple deep web resources simultaneously and integrate the results in context in a single results page. TX2 illustrates the promise of constructing mash-ups automatically and the potential of mining web interactions to explore deep web resources.
Citations: 3