Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Session details: Session 5A: Statistics and Interactive Machine Learning
Scott R. Klemmer
{"title":"Session details: Session 5A: Statistics and Interactive Machine Learning","authors":"Scott R. Klemmer","doi":"10.1145/3368377","DOIUrl":"https://doi.org/10.1145/3368377","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128424510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PrivateTalk: Activating Voice Input with Hand-On-Mouth Gesture Detected by Bluetooth Earphones
Yukang Yan, Chun Yu, Yingtian Shi, Minxing Xie
{"title":"PrivateTalk: Activating Voice Input with Hand-On-Mouth Gesture Detected by Bluetooth Earphones","authors":"Yukang Yan, Chun Yu, Yingtian Shi, Minxing Xie","doi":"10.1145/3332165.3347950","DOIUrl":"https://doi.org/10.1145/3332165.3347950","url":null,"abstract":"We introduce PrivateTalk, an on-body interaction technique that allows users to activate voice input by performing the Hand-On-Mouth gesture during speaking. The gesture is performed as a hand partially covering the mouth from one side. PrivateTalk provides two benefits simultaneously. First, it enhances privacy by reducing the spread of voice while also concealing the lip movements from the view of other people in the environment. Second, the simple gesture removes the need for speaking wake-up words and is more accessible than a physical/software button especially when the device is not in the user's hands. To recognize the Hand-On-Mouth gesture, we propose a novel sensing technique that leverages the difference of signals received by two Bluetooth earphones worn on the left and right ear. Our evaluation shows that the gesture can be accurately detected and users consistently like PrivateTalk and consider it intuitive and effective.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"84 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128689355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
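The paper's sensing idea compares what the left and right earphones hear: a hand covering the mouth from one side attenuates the voice reaching that side's earphone. Below is a minimal sketch of such an energy-asymmetry detector; the function names and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rms_energy(frame: np.ndarray) -> float:
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def hand_on_mouth_score(left: np.ndarray, right: np.ndarray) -> float:
    """Normalized left/right energy asymmetry in [-1, 1].

    A hand covering the mouth from one side attenuates the voice
    reaching that side's earphone, skewing the balance.
    """
    el, er = rms_energy(left), rms_energy(right)
    return (el - er) / (el + er + 1e-9)

def detect_private_talk(left: np.ndarray, right: np.ndarray,
                        threshold: float = 0.5) -> bool:
    """Fire when the asymmetry exceeds a (hypothetical) threshold."""
    return abs(hand_on_mouth_score(left, right)) > threshold
```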
3D Printed Fabric: Techniques for Design and 3D Weaving Programmable Textiles
Haruki Takahashi, Jeeeun Kim
{"title":"3D Printed Fabric: Techniques for Design and 3D Weaving Programmable Textiles","authors":"Haruki Takahashi, Jeeeun Kim","doi":"10.1145/3332165.3347896","DOIUrl":"https://doi.org/10.1145/3332165.3347896","url":null,"abstract":"We present a technique for fabricating soft and flexible textiles using a consumer grade fused deposition modeling (FDM) 3D printer. By controlling the movement of the print header, the FDM alternately weaves the stringing fibers across a row of pillars. Owing to the structure of the fibers, which supports and strengthens the pillars from each side, a 3D printer can print a thin sheet of fabric in an upright position while the fibers are being woven. In addition, this technique enables users to employ materials with various colors and/or properties when designing a pattern, and to prototype an interactive object using a variety of off-the-shelf materials such as a conductive filament. We also describe a technique for weaving textiles and introduce a list of parameters that enable users to design their own textile variations. Finally, we demonstrate examples showing the feasibility of our approach as well as numerous applications integrating printed textiles in a custom object design.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130072225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
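The weaving works by steering the print head back and forth across a row of pillars so that stringing fibers span the gaps. The sketch below shows how one could emit G-code for a single woven row; the feed rate, extrusion ratio, and function name are hypothetical, and a real toolpath would need the retraction and temperature tuning the sketch omits.

```python
def weave_row_gcode(pillar_xs, y_front, y_back, z,
                    feed=1200, extrude_per_mm=0.05):
    """Emit G-code that zigzags the nozzle between the front and back
    of a pillar row, leaving stringing fibers between pillars.

    All parameters (feed rate, extrusion ratio) are illustrative.
    """
    lines = [f"G1 X{pillar_xs[0]:.2f} Y{y_front:.2f} Z{z:.2f} F{feed}"]
    e = 0.0                       # cumulative extrusion
    prev_x, prev_y = pillar_xs[0], y_front
    side = y_back                 # alternate front/back each pillar
    for x in pillar_xs[1:]:
        dist = ((x - prev_x) ** 2 + (side - prev_y) ** 2) ** 0.5
        e += dist * extrude_per_mm
        lines.append(f"G1 X{x:.2f} Y{side:.2f} E{e:.4f}")
        prev_x, prev_y = x, side
        side = y_front if side == y_back else y_back
    return "\n".join(lines)

# One row woven across four pillars spaced 5 mm apart.
print(weave_row_gcode([0, 5, 10, 15], y_front=0.0, y_back=2.0, z=0.2))
```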
Opisthenar
H. Yeo, Erwin Wu, Juyoung Lee, Aaron Quigley, H. Koike
{"title":"Opisthenar","authors":"H. Yeo, Erwin Wu, Juyoung Lee, Aaron Quigley, H. Koike","doi":"10.1145/3332165.3347867","DOIUrl":"https://doi.org/10.1145/3332165.3347867","url":null,"abstract":"We introduce a vision-based technique to recognize static hand poses and dynamic finger tapping gestures. Our approach employs a camera on the wrist, with a view of the opisthenar (back of the hand) area. We envisage such cameras being included in a wrist-worn device such as a smartwatch, fitness tracker or wristband. Indeed, selected off-the-shelf smartwatches now incorporate a built-in camera on the side for photography purposes. However, in this configuration, the fingers are occluded from the view of the camera. The oblique angle and placement of the camera make typical vision-based techniques difficult to adopt. Our alternative approach observes small movements and changes in the shape, tendons, skin and bones on the opisthenar area. We train deep neural networks to recognize both hand poses and dynamic finger tapping gestures. While this is a challenging configuration for sensing, we tested the recognition with a real-time user test and achieved a high recognition rate of 89.4% (static poses) and 67.5% (dynamic gestures). Our results further demonstrate that our approach can generalize across sessions and to new users. Namely, users can remove and replace the wrist-worn device while new users can employ a previously trained system, to a certain degree. We conclude by demonstrating three applications and suggest future avenues of work based on sensing the back of the hand.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122086985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
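The recognition pipeline trains deep neural networks on images of the back of the hand. The abstract does not specify an architecture, so the small CNN below (a PyTorch sketch with an assumed input size and class count) is only illustrative of the classification setup.

```python
import torch
import torch.nn as nn

class OpisthenarNet(nn.Module):
    """Illustrative CNN for classifying hand poses from back-of-hand
    crops; the exact architecture is our assumption, not the authors'."""
    def __init__(self, n_poses: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # -> (batch, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, n_poses)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of four 64x64 grayscale opisthenar crops.
logits = OpisthenarNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```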
DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality
Jackie Yang, Christian Holz, E. Ofek, Andrew D. Wilson
{"title":"DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality","authors":"Jackie Yang, Christian Holz, E. Ofek, Andrew D. Wilson","doi":"10.1145/3332165.3347875","DOIUrl":"https://doi.org/10.1145/3332165.3347875","url":null,"abstract":"We explore a future in which people spend considerably more time in virtual reality, even during moments when they transition between locations in the real world. In this paper, we present DreamWalker, a VR system that enables such real-world walking while users explore and stay fully immersed inside large virtual environments in a headset. Provided with a real-world destination, DreamWalker finds a similar path in a pre-authored VR environment and guides the user while real-walking the virtual world. To keep the user from colliding with objects and people in the real-world, DreamWalker's tracking system fuses GPS locations, inside-out tracking, and RGBD frames to 1) continuously and accurately position the user in the real world, 2) sense walkable paths and obstacles in real time, and 3) represent paths through a dynamically changing scene in VR to redirect the user towards the chosen destination. We demonstrate DreamWalker's versatility by enabling users to walk three paths across the large Microsoft campus while enjoying pre-authored VR worlds, supplemented with a variety of obstacle avoidance and redirection techniques. In our evaluation, 8 participants walked across campus along a 15-minute route, experiencing a lively virtual Manhattan that was full of animated cars, people, and other objects.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122968324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
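At its core, the positioning problem fuses drift-prone but smooth inside-out tracking with noisy but absolute GPS fixes. A one-step complementary filter conveys the idea; DreamWalker's actual fusion (which also incorporates RGBD frames) is more sophisticated, and the blend weight below is an assumed value.

```python
import numpy as np

def fuse_position(gps_xy: np.ndarray, odom_delta: np.ndarray,
                  prev_est: np.ndarray, alpha: float = 0.02) -> np.ndarray:
    """One step of a complementary filter: dead-reckon with the
    headset's inside-out tracking delta, then pull the estimate
    gently toward the noisy but drift-free GPS fix."""
    predicted = prev_est + odom_delta                # accurate short term
    return (1 - alpha) * predicted + alpha * gps_xy  # anchored long term

# Two fused steps from hypothetical GPS fixes and tracking deltas.
est = np.zeros(2)
for gps, delta in [(np.array([0.4, 0.1]), np.array([0.5, 0.0])),
                   (np.array([0.9, 0.2]), np.array([0.5, 0.1]))]:
    est = fuse_position(gps, delta, est)
print(est)
```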
Session details: Session 8A: Sensing
Gierad Laput
{"title":"Session details: Session 8A: Sensing","authors":"Gierad Laput","doi":"10.1145/3368383","DOIUrl":"https://doi.org/10.1145/3368383","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123381833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ondulé: Designing and Controlling 3D Printable Springs
Liang He, Huaishu Peng, Michelle Lin, Ravikanth Konjeti, François Guimbretière, Jon E. Froehlich
{"title":"Ondulé: Designing and Controlling 3D Printable Springs","authors":"Liang He, Huaishu Peng, Michelle Lin, Ravikanth Konjeti, François Guimbretière, Jon E. Froehlich","doi":"10.1145/3332165.3347951","DOIUrl":"https://doi.org/10.1145/3332165.3347951","url":null,"abstract":"We present Ondulé-an interactive design tool that allows novices to create parameterizable deformation behaviors in 3D-printable models using helical springs and embedded joints. Informed by spring theory and our empirical mechanical experiments, we introduce spring and joint-based design techniques that support a range of parameterizable deformation behaviors, including compress, extend, twist, bend, and various combinations. To enable users to design and add these deformations to their models, we introduce a custom design tool for Rhino. Here, users can convert selected geometries into springs, customize spring stiffness, and parameterize their design with mechanical constraints for desired behaviors. To demonstrate the feasibility of our approach and the breadth of new designs that it enables, we showcase a set of example 3D-printed applications from launching rocket toys to tangible storytelling props. We conclude with a discussion of key challenges and open research questions.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134131171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
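The spring-theory side of the tool can be illustrated with the textbook rate of a helical spring, k = G·d⁴ / (8·D³·n). The sketch below computes it for a hypothetical printed spring; the PLA shear modulus and dimensions are illustrative estimates, not values from the paper.

```python
def helical_spring_stiffness(G: float, d: float, D: float, n: float) -> float:
    """Textbook helical-spring rate k = G*d^4 / (8*D^3*n), in N/mm.

    G: shear modulus of the material (N/mm^2)
    d: wire (strut) diameter (mm)
    D: mean coil diameter (mm)
    n: number of active coils

    Ondulé pairs spring theory with empirical tests on printed parts;
    this formula alone ignores print-specific effects.
    """
    return G * d ** 4 / (8 * D ** 3 * n)

# Hypothetical printed-PLA spring: G ~ 1000 N/mm^2, 2 mm strut,
# 15 mm mean coil diameter, 8 active coils.
print(f"{helical_spring_stiffness(1000, 2.0, 15.0, 8):.3f} N/mm")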
MagicalHands
Rahul Arora, Rubaiat Habib Kazi, D. Kaufman, Wilmot Li, Karan P. Singh
{"title":"MagicalHands","authors":"Rahul Arora, Rubaiat Habib Kazi, D. Kaufman, Wilmot Li, Karan P. Singh","doi":"10.1145/3332165.3347942","DOIUrl":"https://doi.org/10.1145/3332165.3347942","url":null,"abstract":"We explore the use of hand gestures for authoring animations in virtual reality (VR). We first perform a gesture elicitation study to understand user preferences for a spatiotemporal, bare-handed interaction system in VR. Specifically, we focus on creating and editing dynamic, physical phenomena (e.g., particle systems, deformations, coupling), where the mapping from gestures to animation is ambiguous and indirect. We present commonly observed mid-air gestures from the study that cover a wide range of interaction techniques, from direct manipulation to abstract demonstrations. To this end, we extend existing gesture taxonomies to the rich spatiotemporal interaction space of the target domain and distill our findings into a set of guidelines that inform the design of natural user interfaces for VR animation. Finally, based on our guidelines, we develop a proof-of-concept gesture-based VR animation system, MagicalHands. Our results, as well as feedback from user evaluation, suggest that the expressive qualities of hand gestures help users animate more effectively in VR.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115834086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Mantis: A Scalable, Lightweight and Accessible Architecture to Build Multiform Force Feedback Systems
G. Barnaby, A. Roudaut
{"title":"Mantis: A Scalable, Lightweight and Accessible Architecture to Build Multiform Force Feedback Systems","authors":"G. Barnaby, A. Roudaut","doi":"10.1145/3332165.3347909","DOIUrl":"https://doi.org/10.1145/3332165.3347909","url":null,"abstract":"Mantis is a highly scalable system architecture that democratizes haptic devices by enabling designers to create accurate, multiform and accessible force feedback systems. Mantis uses brushless DC motors, custom electronic controllers, and an admittance control scheme to achieve stable high-quality haptic rendering. It enables common desktop form factors but also: large workspaces (multiple arm lengths), multiple arm workspaces, and mobile workspaces. It also uses accessible components and costs significantly less than typical high-fidelity force feedback solutions which are often confined to research labs. We present our design and show that Mantis can reproduce the haptic fidelity of common robotic arms. We demonstrate its multiform ability by implementing five systems: a single desktop-sized device, a single large workspace device, a large workspace system with four points of feedback, a mobile system and a wearable one.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132181953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
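Admittance control renders haptics by letting the measured user force drive a virtual mass-damper whose resulting velocity is commanded to the motors. A one-dimensional sketch is below; the virtual mass, damping, and time step are assumed values, not Mantis's actual gains.

```python
def admittance_step(f_ext: float, v: float, m: float = 2.0,
                    b: float = 10.0, dt: float = 0.001) -> float:
    """One step of a 1-DoF admittance law: the measured external
    force drives a virtual mass-damper, M*dv/dt + B*v = f_ext,
    and the resulting velocity is sent to the motor controller."""
    dv = (f_ext - b * v) / m
    return v + dv * dt

# Simulate 1 s of a constant 5 N push; velocity converges to f/b.
v = 0.0
for _ in range(1000):
    v = admittance_step(5.0, v)
print(f"steady-state velocity ~ {v:.3f} m/s")  # ~ 0.5 m/s
```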
Session details: Session 1A: Knitting, Weaving, Fabrics
E. Whiting
{"title":"Session details: Session 1A: Knitting, Weaving, Fabrics","authors":"E. Whiting","doi":"10.1145/3368369","DOIUrl":"https://doi.org/10.1145/3368369","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133861499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0