Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements
Augusto Esteves, Eduardo Velloso, A. Bulling, Hans-Werner Gellersen
{"title":"Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements","authors":"Augusto Esteves, Eduardo Velloso, A. Bulling, Hans-Werner Gellersen","doi":"10.1145/2807442.2807499","DOIUrl":"https://doi.org/10.1145/2807442.2807499","url":null,"abstract":"We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and detect whether and at which control the user is looking at. In Orbits, controls include targets that move in a circular trajectory in the face of the watch, and can be selected by following the desired one for a small amount of time. We conducted two user studies to assess the technique's recognition and robustness, which demonstrated how Orbits is robust against false positives triggered by natural eye movements and how it presents a hands-free, high accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls -- very unusual in current HCI interfaces -- these were generally well received by participants in a third and final study.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130342108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 208
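
A note on how such selection can work: the abstract does not spell out the detection algorithm, but smooth-pursuit interfaces of this kind commonly correlate the stream of gaze samples against each moving control's known trajectory over a sliding window and select the control whose motion the eyes follow. The sketch below illustrates that matching step; the function name, the 0.8 threshold, and the per-axis Pearson correlation are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def pursuit_match(gaze, targets, threshold=0.8):
    """Correlate a window of gaze samples against each moving control.

    gaze:    (N, 2) array of gaze x/y samples over the window.
    targets: dict mapping control name -> (N, 2) array of that
             control's on-screen positions over the same window.
    Returns the best-matching control, or None when no control's
    correlation exceeds the threshold on both axes (which is what
    keeps natural, non-pursuit eye movements from triggering input).
    """
    best_name, best_score = None, threshold
    for name, path in targets.items():
        # Pearson correlation per axis; both must be high for a match.
        rx = np.corrcoef(gaze[:, 0], path[:, 0])[0, 1]
        ry = np.corrcoef(gaze[:, 1], path[:, 1])[0, 1]
        score = min(rx, ry)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```
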
Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG
D. Huang, Xiaoyi Zhang, T. S. Saponas, J. Fogarty, Shyamnath Gollakota
{"title":"Leveraging Dual-Observable Input for Fine-Grained Thumb Interaction Using Forearm EMG","authors":"D. Huang, Xiaoyi Zhang, T. S. Saponas, J. Fogarty, Shyamnath Gollakota","doi":"10.1145/2807442.2807506","DOIUrl":"https://doi.org/10.1145/2807442.2807506","url":null,"abstract":"We introduce the first forearm-based EMG input system that can recognize fine-grained thumb gestures, including left swipes, right swipes, taps, long presses, and more complex thumb motions. EMG signals for thumb motions sensed from the forearm are quite weak and require significant training data to classify. We therefore also introduce a novel approach for minimally-intrusive collection of labeled training data for always-available input devices. Our dual-observable input approach is based on the insight that interaction observed by multiple devices allows recognition by a primary device (e.g., phone recognition of a left swipe gesture) to create labeled training examples for another (e.g., forearm-based EMG data labeled as a left swipe). We implement a wearable prototype with dry EMG electrodes, train with labeled demonstrations from participants using their own phones, and show that our prototype can recognize common fine-grained thumb gestures and user-defined complex gestures.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127896262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
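
The dual-observable idea turns one device's recognition result into labels for another device's sensor stream. A minimal sketch of that pairing step is below; the sample rate, window length, and function names are hypothetical, not the paper's parameters.

```python
import numpy as np

EMG_HZ = 1000      # assumed EMG sample rate
WINDOW_S = 1.0     # assumed window length around each gesture

def label_emg_windows(emg, emg_t0, phone_events):
    """Build labeled EMG training examples from phone-observed gestures.

    emg:          (N, C) array of EMG samples across C electrodes.
    emg_t0:       timestamp (seconds) of emg[0].
    phone_events: list of (timestamp, label) pairs produced when the
                  phone itself recognizes a gesture, e.g. a left swipe.
    Returns (X, y): EMG windows centered on each event, labeled with
    the phone's recognition result.
    """
    half = int(EMG_HZ * WINDOW_S / 2)
    X, y = [], []
    for ts, label in phone_events:
        center = int((ts - emg_t0) * EMG_HZ)
        if center - half < 0 or center + half > len(emg):
            continue  # event falls outside the recording
        X.append(emg[center - half:center + half])
        y.append(label)
    return np.stack(X), np.array(y)
```
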
RevoMaker: Enabling Multi-directional and Functionally-embedded 3D printing using a Rotational Cuboidal Platform
Wei Gao, Yunbo Zhang, Diogo C. Nazzetta, K. Ramani, R. Cipra
{"title":"RevoMaker: Enabling Multi-directional and Functionally-embedded 3D printing using a Rotational Cuboidal Platform","authors":"Wei Gao, Yunbo Zhang, Diogo C. Nazzetta, K. Ramani, R. Cipra","doi":"10.1145/2807442.2807476","DOIUrl":"https://doi.org/10.1145/2807442.2807476","url":null,"abstract":"In recent years, 3D printing has gained significant attention from the maker community, academia, and industry to support low-cost and iterative prototyping of designs. Current unidirectional extrusion systems require printing sacrificial material to support printed features such as overhangs. Furthermore, integrating functions such as sensing and actuation into these parts requires additional steps and processes to create \"functional enclosures\", since design functionality cannot be easily embedded into prototype printing. All of these factors result in relatively high design iteration times. We present \"RevoMaker\", a self-contained 3D printer that creates direct out-of-the-printer functional prototypes, using less build material and with substantially less reliance on support structures. By modifying a standard low-cost FDM printer with a revolving cuboidal platform and printing partitioned geometries around cuboidal facets, we achieve a multidirectional additive prototyping process to reduce the print and support material use. Our optimization framework considers various orientations and sizes for the cuboidal base. The mechanical, electronic, and sensory components are preassembled on the flattened laser-cut facets and enclosed inside the cuboid when closed. We demonstrate RevoMaker directly printing a variety of customized and fully-functional product prototypes, such as computer mice and toys, thus illustrating the new affordances of 3D printing for functional product design.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117011780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 97
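
As a toy illustration of why printing around multiple facets reduces support material: if every face of the model can be charged to whichever build direction leaves it least overhung, the total is never worse than the best single direction. The overhang-area proxy and 45-degree limit below are illustrative assumptions, not the paper's optimization framework.

```python
import numpy as np

# The six build directions offered by the cuboidal platform's facets.
AXES = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

def overhang_area(normals, areas, up, limit_deg=45.0):
    """Crude support proxy: total area of faces that overhang past
    the printer's limit for a given build direction `up`."""
    down = normals @ up < -np.cos(np.radians(limit_deg))
    return areas[down].sum()

def single_vs_multi(normals, areas, limit_deg=45.0):
    """Compare the best single build direction against charging each
    face to its cheapest facet direction (the multidirectional case)."""
    single = min(overhang_area(normals, areas, up, limit_deg)
                 for up in AXES)
    per_face = np.stack([
        np.where(normals @ up < -np.cos(np.radians(limit_deg)), areas, 0.0)
        for up in AXES])
    multi = per_face.min(axis=0).sum()
    return single, multi
```
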
Tiltcasting: 3D Interaction on Large Displays using a Mobile Device
Krzysztof Pietroszek, James R. Wallace, E. Lank
{"title":"Tiltcasting: 3D Interaction on Large Displays using a Mobile Device","authors":"Krzysztof Pietroszek, James R. Wallace, E. Lank","doi":"10.1145/2807442.2807471","DOIUrl":"https://doi.org/10.1145/2807442.2807471","url":null,"abstract":"We develop and formally evaluate a metaphor for smartphone interaction with 3D environments: Tiltcasting. Under the Tiltcasting metaphor, users interact within a rotatable 2D plane that is \"cast\" from their phone's interactive display into 3D space. Through an empirical validation, we show that Tiltcasting supports efficient pointing, interaction with occluded objects, disambiguation between nearby objects, and object selection and manipulation in fully addressable 3D space. Our technique out-performs existing target agnostic pointing implementations, and approaches the performance of physical pointing with an off-the-shelf smartphone.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121542779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
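
The metaphor's geometry can be sketched directly: the phone's IMU orientation rotates a 2D plane anchored in the scene, and a touch on the screen maps to the corresponding point on that plane. The plane size, axis conventions, and names below are assumptions for illustration.

```python
import numpy as np

def touch_to_world(touch_uv, rotation, origin, plane_size=(1.0, 1.5)):
    """Map a normalized touch point on the phone to a 3D scene point.

    touch_uv: (u, v) in [0, 1]^2, the touch location on the display.
    rotation: 3x3 rotation matrix from the phone's IMU orientation;
              tilting the phone rotates the cast plane with it.
    origin:   3D point where the plane is anchored in the scene.
    """
    u, v = touch_uv
    # Express the touch in the plane's local frame (z = 0 on the plane).
    local = np.array([(u - 0.5) * plane_size[0],
                      (v - 0.5) * plane_size[1],
                      0.0])
    return origin + rotation @ local
```
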
Projectibles: Optimizing Surface Color For Projection
Brett R. Jones, Rajinder Sodhi, Pulkit Budhiraja, Kevin Karsch, B. Bailey, D. Forsyth
{"title":"Projectibles: Optimizing Surface Color For Projection","authors":"Brett R. Jones, Rajinder Sodhi, Pulkit Budhiraja, Kevin Karsch, B. Bailey, D. Forsyth","doi":"10.1145/2807442.2807486","DOIUrl":"https://doi.org/10.1145/2807442.2807486","url":null,"abstract":"Typically video projectors display images onto white screens, which can result in a washed out image. Projectibles algorithmically control the display surface color to increase the contrast and resolution. By combining a printed image with projected light, we can create animated, high resolution, high dynamic range visual experiences for video sequences. We present two algorithms for separating an input video sequence into a printed component and projected component, maximizing the combined contrast and resolution while minimizing any visual artifacts introduced from the decomposition. We present empirical measurements of real-world results of six example video sequences, subjective viewer feedback ratings, and we discuss the benefits and limitations of Projectibles. This is the first approach to combine a static display with a dynamic display for the display of video, and the first to optimize surface color for projection of video.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131558198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
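
The abstract does not detail the two decomposition algorithms, but the underlying image-formation model is multiplicative: the viewer sees the printed surface reflectance times the projected light. The sketch below is a deliberately simple heuristic under that model, not the paper's optimization (which also minimizes decomposition artifacts).

```python
import numpy as np

def decompose(video, eps=1e-3):
    """Split a video (T, H, W, 3), values in [0, 1], into a static
    printed component and per-frame projector images, assuming
    observed = albedo * projected_light at every pixel."""
    # Print the brightest value each pixel ever needs: the projector
    # can only darken a pixel relative to the print's reflectance.
    albedo = video.max(axis=0)
    # Per-frame projector input recovers each frame under the model.
    proj = video / np.clip(albedo, eps, None)
    return albedo, np.clip(proj, 0.0, 1.0)
```
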
ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication
Christian Weichel, John Hardy, Jason Alexander, Hans-Werner Gellersen
{"title":"ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication","authors":"Christian Weichel, John Hardy, Jason Alexander, Hans-Werner Gellersen","doi":"10.1145/2807442.2807451","DOIUrl":"https://doi.org/10.1145/2807442.2807451","url":null,"abstract":"Digital fabrication machines such as 3D printers and laser-cutters allow users to produce physical objects based on virtual models. The creation process is currently unidirectional: once an object is fabricated it is separated from its originating virtual model. Consequently, users are tied into digital modeling tools, the virtual design must be completed before fabrication, and once fabricated, re-shaping the physical object no longer influences the digital model. To provide a more flexible design process that allows objects to iteratively evolve through both digital and physical input, we introduce bidirectional fabrication. To demonstrate the concept, we built ReForm, a system that integrates digital modeling with shape input, shape output, annotation for machine commands, and visual output. By continually synchronizing the physical object and digital model it supports object versioning to allow physical changes to be undone. Through application examples, we demonstrate the benefits of ReForm to the digital fabrication process.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128383598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 61
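
One concrete mechanism named in the abstract is object versioning, which lets physical edits be undone by returning to an earlier synchronized state. A minimal sketch of such a history follows; the class and its interface are hypothetical, not ReForm's actual API.

```python
class ObjectHistory:
    """Linear version history for a synchronized physical/digital
    object: every scan or digital edit commits a new model state,
    and undo rolls back so the previous state can be re-fabricated."""

    def __init__(self, initial_model):
        self._versions = [initial_model]

    def commit(self, model):
        self._versions.append(model)

    def undo(self):
        if len(self._versions) > 1:
            self._versions.pop()  # discard the newest state
        return self._versions[-1]

    @property
    def current(self):
        return self._versions[-1]
```
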
BackHand: Sensing Hand Gestures via Back of the Hand
Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, Mike Y. Chen
{"title":"BackHand: Sensing Hand Gestures via Back of the Hand","authors":"Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, Mike Y. Chen","doi":"10.1145/2807442.2807462","DOIUrl":"https://doi.org/10.1145/2807442.2807462","url":null,"abstract":"In this paper, we explore using the back of hands for sensing hand gestures, which interferes less than glove-based approaches and provides better recognition than sensing at wrists and forearms. Our prototype, BackHand, uses an array of strain gauge sensors affixed to the back of hands, and applies machine learning techniques to recognize a variety of hand gestures. We conducted a user study with 10 participants to better understand gesture recognition accuracy and the effects of sensing locations. Results showed that sensor reading patterns differ significantly across users, but are consistent for the same user. The leave-one-user-out accuracy is low at an average of 27.4%, but reaches 95.8% average accuracy for 16 popular hand gestures when personalized for each participant. The most promising location spans the 1/8~1/4 area between the metacarpophalangeal joints (MCP, the knuckles between the hand and fingers) and the head of ulna (tip of the wrist).","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124089295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59
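
The two accuracy figures come from two evaluation protocols: leave-one-user-out (27.4%) and within-user personalization (95.8%). A sketch of both is below; the abstract does not name the classifier, so the SVM here is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def personalized_accuracy(X, y, folds=5):
    """Within-user protocol: train and test on one participant's own
    strain-gauge feature vectors X with gesture labels y."""
    return cross_val_score(SVC(kernel="rbf", C=10.0), X, y, cv=folds).mean()

def leave_one_user_out(users):
    """Cross-user protocol: train on all participants but one, test
    on the held-out participant, and average over participants.
    users: list of (X, y) pairs, one per participant."""
    scores = []
    for i, (X_test, y_test) in enumerate(users):
        X = np.vstack([u[0] for j, u in enumerate(users) if j != i])
        y = np.concatenate([u[1] for j, u in enumerate(users) if j != i])
        clf = SVC(kernel="rbf", C=10.0).fit(X, y)
        scores.append(clf.score(X_test, y_test))
    return float(np.mean(scores))
```
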
Patching Physical Objects
Alexander Teibrich, Stefanie Müller, François Guimbretière, Róbert Kovács, Stefan Neubert, Patrick Baudisch
{"title":"Patching Physical Objects","authors":"Alexander Teibrich, Stefanie Müller, François Guimbretière, Róbert Kovács, Stefan Neubert, Patrick Baudisch","doi":"10.1145/2807442.2807467","DOIUrl":"https://doi.org/10.1145/2807442.2807467","url":null,"abstract":"Personal fabrication is currently a one-way process: Once an object has been fabricated with a 3D printer, it cannot be changed anymore; any change requires printing a new version from scratch. The problem is that this approach ignores the nature of design iteration, i.e. that in subsequent iterations large parts of an object stay the same and only small parts change. This makes fabricating from scratch feel unnecessary and wasteful. In this paper, we propose a different approach: instead of re-printing the entire object from scratch, we suggest patching the existing object to reflect the next design iteration. We built a system on top of a 3D printer that accomplishes this: Users mount the existing object into the 3D printer, then load both the original and the modified 3D model into our software, which in turn calculates how to patch the object. After identifying which parts to remove and what to add, our system locates the existing object in the printer using the system's built-in 3D scanner. After calibrating the orientation, a mill first removes the outdated geometry, then a print head prints the new geometry in place. Since only a fraction of the entire object is refabricated, our approach reduces material consumption and plastic waste (for our example objects by 82% and 93% respectively).","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123525473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 80
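
The planning step at the heart of this approach is a shape diff: subtracting the new model from the old yields the outdated geometry to mill away, and subtracting the old from the new yields the geometry to print in place. Below is a sketch using the trimesh library (boolean operations require a backend such as manifold3d); it omits the paper's scanning and calibration steps.

```python
import trimesh  # boolean ops need a backend, e.g. `pip install manifold3d`

def patch_plan(old_mesh, new_mesh):
    """Compute the two-phase patch that turns the already-fabricated
    object (old_mesh) into the next design iteration (new_mesh)."""
    to_mill = old_mesh.difference(new_mesh)   # outdated geometry to remove
    to_print = new_mesh.difference(old_mesh)  # new geometry to add in place
    return to_mill, to_print
```

Because only these two diffs are refabricated rather than the whole object, the material savings scale with how small the design change is, which is what produces the reported 82% and 93% reductions for the example objects.
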
Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming
Philip J. Guo
{"title":"Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming","authors":"Philip J. Guo","doi":"10.1145/2807442.2807469","DOIUrl":"https://doi.org/10.1145/2807442.2807469","url":null,"abstract":"One-on-one tutoring from a human expert is an effective way for novices to overcome learning barriers in complex domains such as computer programming. But there are usually far fewer experts than learners. To enable a single expert to help more learners at once, we built Codeopticon, an interface that enables a programming tutor to monitor and chat with dozens of learners in real time. Each learner codes in a workspace that consists of an editor, compiler, and visual debugger. The tutor sees a real-time view of each learner's actions on a dashboard, with each learner's workspace summarized in a tile. At a glance, the tutor can see how learners are editing and debugging their code, and what errors they are encountering. The dashboard automatically reshuffles tiles so that the most active learners are always in the tutor's main field of view. When the tutor sees that a particular learner needs help, they can open an embedded chat window to start a one-on-one conversation. A user study showed that 8 first-time Codeopticon users successfully tutored anonymous learners from 54 countries in a naturalistic online setting. On average, in a 30-minute session, each tutor monitored 226 learners, started 12 conversations, exchanged 47 chats, and helped 2.4 learners.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122585544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 78
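
The dashboard's automatic reshuffling can be sketched as sorting tiles by a recency-weighted activity score, so the most active learners stay in the tutor's main field of view. The exponential decay and half-life below are illustrative assumptions, not the paper's ranking function.

```python
import time

def reshuffle(tiles, half_life_s=60.0, now=None):
    """Order learner tiles so recently active learners come first.

    tiles: list of dicts, each with an 'events' list of timestamps
    for that learner's actions (edits, compiles, errors). Older
    activity decays exponentially so it counts for less.
    """
    now = time.time() if now is None else now

    def score(tile):
        return sum(0.5 ** ((now - t) / half_life_s) for t in tile["events"])

    return sorted(tiles, key=score, reverse=True)
```
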
LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint
Ken Nakagaki, Sean Follmer, H. Ishii
{"title":"LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint","authors":"Ken Nakagaki, Sean Follmer, H. Ishii","doi":"10.1145/2807442.2807452","DOIUrl":"https://doi.org/10.1145/2807442.2807452","url":null,"abstract":"In this paper we explore the design space of actuated curve interfaces, a novel class of shape changing-interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing such aspects of lines and curves, together with the added capability of shape-change, new possibilities for display, interaction and body constraint are possible. In order to investigate these possibilities we have implemented two actuated curve interfaces at different scales. LineFORM, our implementation, inspired by serpentine robotics, is comprised of a series chain of 1DOF servo motors with integrated sensors for direct manipulation. To motivate this work we present various applications such as shape changing cords, mobiles, body constraints, and data manipulation tools.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125401893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 90
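
Because the hardware is a serial chain of 1-DOF joints, the curve the interface displays is just the chain's forward kinematics. A minimal planar sketch follows; the link length and angle convention are assumptions, and the actual device articulates beyond a single plane.

```python
import numpy as np

def chain_shape(joint_angles, link_len=0.02):
    """Planar forward kinematics for a serial chain of 1-DOF joints:
    given each servo's bend angle (radians), return the 2D positions
    of the joints, i.e. the curve the interface currently forms."""
    points = [np.zeros(2)]
    heading = 0.0
    for angle in joint_angles:
        heading += angle  # each joint bends the chain further
        step = link_len * np.array([np.cos(heading), np.sin(heading)])
        points.append(points[-1] + step)
    return np.array(points)

# Example: 20 servos each bent by 0.1 rad produce a gentle arc.
curve = chain_shape([0.1] * 20)
```
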