Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

Improving Virtual Keyboards When All Finger Positions Are Known
Daewoong Choi, Hyeonjoong Cho, J. Cheong
DOI: 10.1145/2807442.2807491
Abstract: Current virtual keyboards are known to be slower and less convenient than physical QWERTY keyboards because they simply imitate the traditional QWERTY keyboards on touchscreens. In order to improve virtual keyboards, we consider two reasonable assumptions based on the observation of skilled typists. First, the keys are already assigned to each finger for typing. Based on this assumption, we suggest restricting each finger to entering pre-allocated keys only. Second, non-touching fingers move in correlation with the touching finger because of the intrinsic structure of human hands. To verify our assumptions, we conducted two experiments with skilled typists. In the first experiment, we statistically verified the second assumption. We then suggest a novel virtual keyboard using our observations. In the second experiment, we show that our suggested keyboard outperforms existing virtual keyboards.
Citations: 4
3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles
Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison
DOI: 10.1145/2807442.2807484
Abstract: We introduce a technique for furbricating 3D printed hair, fibers and bristles, by exploiting the stringing phenomena inherent in 3D printers using fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
Citations: 64
PERCs: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays
Simon Voelker, Christian Cherek, Jan Thar, Thorsten Karrer, Christian Thoresen, Kjell Ivar Øvergård, Jan O. Borchers
DOI: 10.1145/2807442.2807466
Abstract: Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible, or picked it up and removed it from the surface. We introduce PERCs, persistent capacitive tangibles that "know" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible to detect the touch screen's own, weak electromagnetic touch detection probing signal. Thus, unlike previous designs, PERCs do not get filtered out over time by the adaptive signal filters of the touch screen. We provide a technical overview of the theory behind PERCs and our prototype construction, and we evaluate detection rates, timing performance, and positional and angular accuracy for PERCs on a variety of unmodified, commercially available multi-touch devices. Through their affordable circuitry and high accuracy, PERCs open up the potential for a variety of new applications that use tangibles on today's ubiquitous multi-touch devices.
Citations: 41
TurkDeck: Physical Virtual Reality Based on People
Lung-Pan Cheng, T. Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Róbert Kovács, Johannes Jasper, J. Kemper, Patrick Baudisch
DOI: 10.1145/2807442.2807463
Abstract: TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck produces the haptic sensation using props, i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. Unlike previous work on prop-based virtual reality, however, TurkDeck allows creating arbitrarily large virtual worlds in finite space and using a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by making a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called "human actuators" by displaying visual instructions that tell the human actuators when and where to place props and how to actuate them. We demonstrate TurkDeck with the example of an immersive 300 m² experience in a 25 m² physical space. We show how to simulate a wide range of physical objects and effects, including walls, doors, ledges, steps, beams, switches, stompers, portals, zip lines, and wind. In a user study, participants rated the realism/immersion of TurkDeck higher than a traditional prop-less baseline condition (4.9 vs. 3.6 on a 7-point Likert scale).
Citations: 158
Tactile Animation by Direct Manipulation of Grid Displays
Oliver S. Schneider, A. Israr, Karon E Maclean
DOI: 10.1145/2807442.2807470
Abstract: Chairs, wearables, and handhelds have become popular sites for spatial tactile display. Visual animators, already expert in using time and space to portray motion, could readily transfer their skills to produce rich haptic sensations if given the right tools. We introduce the tactile animation object, a directly manipulated phantom tactile sensation. This abstraction has two key benefits: 1) efficient, creative, iterative control of spatiotemporal sensations, and 2) the potential to support a variety of tactile grids, including sparse displays. We present Mango, an editing tool for animators, including its rendering pipeline and perceptually-optimized interpolation algorithm for sparse vibrotactile grids. In our evaluation, professional animators found it easy to create a variety of vibrotactile patterns, with both experts and novices preferring the tactile animation object over controlling actuators individually.
Citations: 67
uniMorph: Fabricating Thin Film Composites for Shape-Changing Interfaces
Felix Heibeck, B. Tome, Clark Della Silva, H. Ishii
DOI: 10.1145/2807442.2807472
Abstract: Researchers have been investigating shape-changing interfaces; however, technologies for thin, reversible shape change remain complicated to fabricate. uniMorph is an enabling technology for rapid digital fabrication of customized thin-film shape-changing interfaces. By combining the thermoelectric characteristics of copper with the high thermal expansion rate of ultra-high molecular weight polyethylene, we are able to actuate the shape of flexible circuit composites directly. The shape-changing actuation is enabled by a temperature driven mechanism and reduces the complexity of fabrication for thin shape-changing interfaces. In this paper we describe how to design and fabricate thin uniMorph composites. We present composites that are actuated by either environmental temperature changes or active heating of embedded structures and provide a systematic overview of shape-changing primitives. Finally, we present different sensing techniques that leverage the existing copper structures or can be seamlessly embedded into the uniMorph composite. To demonstrate the wide applicability of uniMorph, we present several applications in ubiquitous and mobile computing.
Citations: 61
Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display
Philipp Schoessler, Daniel Windham, Daniel Leithinger, Sean Follmer, H. Ishii
DOI: 10.1145/2807442.2807453
Abstract: Pin-based shape displays not only give physical form to digital information, they have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.
Citations: 29
These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software
B. Lafreniere, Parmit K. Chilana, Adam Fourney, Michael A. Terry
DOI: 10.1145/2807442.2807482
Abstract: The names, icons, and tooltips of commands in feature-rich software are an important source of guidance when locating and selecting amongst commands. Unfortunately, these cues can mislead users into believing that a command is appropriate for a given task, when another command would be more appropriate, resulting in wasted time and frustration. In this paper, we present command disambiguation techniques that inform the user of alternative commands before, during, and after an incorrect command has been executed. To inform the design of these techniques, we define categories of false-feedforward errors caused by misleading interface cues, and identify causes for each. Our techniques are the first designed explicitly to solve this problem in feature-rich software. A user study showed enthusiasm for the techniques, and revealed their potential to play a key role in learning of feature-rich software.
Citations: 17
Gaze vs. Mouse: A Fast and Accurate Gaze-Only Click Alternative
C. Lutteroth, A. Penkar, Gerald Weber
DOI: 10.1145/2807442.2807461
Abstract: Eye gaze tracking is a promising input method which is gradually finding its way into the mainstream. An obvious question to arise is whether it can be used for point-and-click tasks, as an alternative for mouse or touch. Pointing with gaze is both fast and natural, although its accuracy is limited. There are still technical challenges with gaze tracking, as well as inherent physiological limitations. Furthermore, providing an alternative to clicking is challenging. We are considering use cases where input based purely on gaze is desired, and the click targets are discrete user interface (UI) elements which are too small to be reliably resolved by gaze alone, e.g., links in hypertext. We present Actigaze, a new gaze-only click alternative which is fast and accurate for this scenario. A clickable user interface element is selected by dwelling on one of a set of confirm buttons, based on two main design contributions: First, the confirm buttons stay on fixed positions with easily distinguishable visual identifiers such as colors, enabling procedural learning of the confirm button position. Secondly, UI elements are associated with confirm buttons through the visual identifiers in a way which minimizes the likelihood of inadvertent clicks. We evaluate two variants of the proposed click alternative, comparing them against the mouse and another gaze-only click alternative.
Citations: 71
On Sounder Ground: CAAT, a Viable Widget for Affective Reaction Assessment
Bruno Cardoso, Osvaldo Santos, T. Romão
DOI: 10.1145/2807442.2807465
Abstract: The reliable assessment of affective reactions to stimuli is paramount in a variety of scientific fields, including HCI (Human-Computer Interaction). Variation of emotional states over time, however, warrants the need for quick measurements of emotions. To address it, new tools for quick assessments of affective states have been developed. In this work, we explore the CAAT (Circumplex Affective Assessment Tool), an instrument with a unique design in the scope of affect assessment -- a graphical control element -- that makes it amenable to seamless integration in user interfaces. We briefly describe the CAAT and present a multi-dimensional evaluation that evidences the tool's viability. We have assessed its test-retest reliability, construct validity and quickness of use, by collecting data through an unsupervised, web-based user study. Results show high test-retest reliability, evidence the tool's construct validity and confirm its quickness of use, making it a good fit for longitudinal studies and systems requiring quick assessments of emotional reactions.
Citations: 5