Proceedings of the 27th annual ACM symposium on User interface software and technology: latest publications

PortraitSketch: face sketching assistance for novices
Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller
{"title":"PortraitSketch: face sketching assistance for novices","authors":"Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller","doi":"10.1145/2642918.2647399","DOIUrl":"https://doi.org/10.1145/2642918.2647399","url":null,"abstract":"We present PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. We present algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image. In contrast to automatic stylization systems, PortraitSketch is designed to encourage a sense of ownership and accomplishment in the user. To this end, all adjustments are performed in real-time, and the user ends up directly drawing all strokes on the canvas. The findings from our user study suggest that users prefer drawing with some automatic assistance, thereby producing better drawings, and that assistance does not decrease the perceived level of involvement in the creative process.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78603547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
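The abstract does not spell out the adjustment algorithm. A minimal sketch of the general idea, assuming a gradient-based snap of traced stroke points toward nearby image edges, with per-point opacity taken from local gradient magnitude; the function and parameter names (refine_stroke, search_radius, alpha) are hypothetical, not the paper's:

```python
import numpy as np

def refine_stroke(points, image, search_radius=4, alpha=0.5):
    """Nudge traced stroke points (integer pixel coords) toward the
    strongest nearby edge and derive per-point opacity from local
    gradient magnitude. A toy stand-in for PortraitSketch-style
    geometry/opacity adjustment, not the paper's algorithm."""
    gy, gx = np.gradient(image.astype(float))   # d/drow, d/dcol
    mag = np.hypot(gx, gy)
    h, w = image.shape
    refined, opacity = [], []
    for x, y in points:
        x0, x1 = max(0, x - search_radius), min(w, x + search_radius + 1)
        y0, y1 = max(0, y - search_radius), min(h, y + search_radius + 1)
        window = mag[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmax(window), window.shape)
        ex, ey = x0 + dx, y0 + dy               # strongest edge pixel nearby
        refined.append(((1 - alpha) * x + alpha * ex,
                        (1 - alpha) * y + alpha * ey))
        opacity.append(min(1.0, mag[ey, ex] / (mag.max() + 1e-9)))
    return refined, opacity
```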
World-stabilized annotations and virtual scene navigation for remote collaboration
Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer
{"title":"World-stabilized annotations and virtual scene navigation for remote collaboration","authors":"Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer","doi":"10.1145/2642918.2647372","DOIUrl":"https://doi.org/10.1145/2642918.2647372","url":null,"abstract":"We present a system that supports an augmented shared visual space for live mobile remote collaboration on physical tasks. The remote user can explore the scene independently of the local user's current camera position and can communicate via spatial annotations that are immediately visible to the local user in augmented reality. Our system operates on off-the-shelf hardware and uses real-time visual tracking and modeling, thus not requiring any preparation or instrumentation of the environment. It creates a synergy between video conferencing and remote scene exploration under a unique coherent interface. To evaluate the collaboration with our system, we conducted an extensive outdoor user study with 60 participants comparing our system with two baseline interfaces. Our results indicate an overwhelming user preference (80%) for our system, a high level of usability, as well as performance benefits compared with one of the two baselines.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81213715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 180
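"World-stabilized" means an annotation is anchored to 3D scene coordinates recovered by the visual tracker and re-projected into every new camera frame, so it appears fixed to the environment as the device moves. A minimal pinhole-projection sketch of that re-projection step (the paper's tracking and modeling pipeline is far more involved):

```python
import numpy as np

def project_annotation(X_world, K, R, t):
    """Project a world-anchored 3D annotation point into the current
    camera image with the standard pinhole model x ~ K (R X + t).
    Returns pixel coordinates, or None if the point is behind the camera."""
    X_cam = R @ X_world + t
    if X_cam[2] <= 0:
        return None
    x = K @ X_cam
    return x[:2] / x[2]

# Example: identity pose, annotation anchored 2 m in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pixel = project_annotation(np.array([0.1, 0.0, 2.0]), K, np.eye(3), np.zeros(3))
```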
Going to the dogs: towards an interactive touchscreen interface for working dogs
C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson
{"title":"Going to the dogs: towards an interactive touchscreen interface for working dogs","authors":"C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson","doi":"10.1145/2642918.2647364","DOIUrl":"https://doi.org/10.1145/2642918.2647364","url":null,"abstract":"Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record difficulties against what is expected of humans' touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistive dogs in the home.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79506690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
Reflection
Yang Li
{"title":"Reflection","authors":"Yang Li","doi":"10.1145/2642918.2647355","DOIUrl":"https://doi.org/10.1145/2642918.2647355","url":null,"abstract":"By knowing which upcoming action a user might perform, a mobile application can optimize its user interface for accomplishing the task. However, it is technically challenging for developers to implement event prediction in their own application. We created Reflection, an on-device service that answers queries from a mobile application regarding which actions the user is likely to perform at a given time. Any application can register itself and communicate with Reflection via a simple API. Reflection continuously learns a prediction model for each application based on its evolving event history. It employs a novel method for prediction by 1) combining multiple well-designed predictors with an online learning method, and 2) capturing event patterns not only within but also across registered applications--only possible as an infrastructure solution. We evaluated Reflection with two sets of large-scale, in situ mobile event logs, which showed our infrastructure approach is feasible.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"106 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84197609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
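The abstract names the two ingredients (an ensemble of predictors, combined by online learning) but not the concrete interface. A toy sketch under the assumption of a multiplicative-weights ensemble; the class and method names (ReflectionLike, query, report_event) are hypothetical, not the paper's API:

```python
from collections import defaultdict

class ReflectionLike:
    """Toy stand-in for an on-device prediction service: apps report
    events and query likely next actions. Per-predictor weights are
    updated with a multiplicative-weights online learning rule."""

    def __init__(self, predictors, eta=0.5):
        self.predictors = predictors            # callables: history -> guess
        self.weights = [1.0] * len(predictors)
        self.histories = defaultdict(list)      # one event log per app
        self.eta = eta                          # learning rate

    def query(self, app_id):
        history = self.histories[app_id]
        votes = defaultdict(float)
        for w, p in zip(self.weights, self.predictors):
            votes[p(history)] += w              # weighted vote per guess
        return max(votes, key=votes.get) if votes else None

    def report_event(self, app_id, action):
        history = self.histories[app_id]
        for i, p in enumerate(self.predictors):
            if p(history) != action:            # penalize wrong predictors
                self.weights[i] *= (1 - self.eta)
        history.append(action)

# Two naive predictors: repeat the last action, or the most frequent one.
last = lambda h: h[-1] if h else None
frequent = lambda h: max(set(h), key=h.count) if h else None
svc = ReflectionLike([last, frequent])
```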
ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations
Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky
{"title":"ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations","authors":"Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky","doi":"10.1145/2642918.2647417","DOIUrl":"https://doi.org/10.1145/2642918.2647417","url":null,"abstract":"Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87647493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
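The satisfaction test implied by the definition is simple: the eye must lie inside the look-from volume and the gaze ray must pass through the look-at volume. A sketch that approximates both volumes as spheres (a simplifying assumption; the paper permits more general volumes):

```python
import numpy as np

def satisfies_parafrustum(eye, gaze_dir, from_center, from_radius,
                          at_center, at_radius):
    """Check a head pose against a sphere-approximated ParaFrustum:
    eye inside the look-from sphere, and gaze ray intersecting the
    look-at sphere."""
    if np.linalg.norm(eye - from_center) > from_radius:
        return False
    d = gaze_dir / np.linalg.norm(gaze_dir)
    t = max(0.0, (at_center - eye) @ d)     # closest approach along the ray
    closest = eye + t * d
    return np.linalg.norm(at_center - closest) <= at_radius
```

Shrinking both radii toward zero recovers a conventional look-from/look-at camera; growing them loosens the pose constraint, which is exactly the tolerance the study found faster to satisfy.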
Sensing techniques for tablet+stylus interaction
K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson
{"title":"Sensing techniques for tablet+stylus interaction","authors":"K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson","doi":"10.1145/2642918.2647379","DOIUrl":"https://doi.org/10.1145/2642918.2647379","url":null,"abstract":"We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87919239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 72
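A toy illustration of the signal-fusion logic the abstract describes (writing grip suppresses touch; stylus motion correlated with a touch flags it as coming from the pen-holding hand). The threshold and signal names are assumptions, not the paper's values:

```python
def classify_touch(pen_grip, stylus_motion_energy, touch_time,
                   motion_threshold=0.3):
    """Toy fusion rule in the spirit of the paper: a touch from the
    hand holding the pen imparts motion to the stylus, so high stylus
    motion energy at touch time suggests a pen-hand touch."""
    if pen_grip == "writing":
        return "ignore"                     # palm rest while writing
    if stylus_motion_energy(touch_time) > motion_threshold:
        return "pen-hand touch"             # e.g., pinch with pen tucked
    return "bare-hand touch"                # nonpreferred-hand gesture
```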
Content-aware kinetic scrolling for supporting web page navigation
Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos
{"title":"Content-aware kinetic scrolling for supporting web page navigation","authors":"Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos","doi":"10.1145/2642918.2647401","DOIUrl":"https://doi.org/10.1145/2642918.2647401","url":null,"abstract":"Long documents are abundant on the web today, and are accessed in increasing numbers from touchscreen devices such as mobile phones and tablets. Navigating long documents with small screens can be challenging both physically and cognitively because they compel the user to scroll a great deal and to mentally filter for important content. To support navigation of long documents on touchscreen devices, we introduce content-aware kinetic scrolling, a novel scrolling technique that dynamically applies pseudo-haptic feedback in the form of friction around points of high interest within the page. This allows users to quickly find interesting content while exploring without further cluttering the limited visual space. To model degrees of interest (DOI) for a variety of existing web pages, we introduce social wear, a method for capturing DOI based on social signals that indicate collective user interest. Our preliminary evaluation shows that users pay attention to items with kinetic scrolling feedback during search, recognition, and skimming tasks.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87749808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
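The core mechanism is easy to picture: a flick decays under friction, and the friction coefficient is boosted wherever the DOI is high, so the view "catches" on interesting content. A minimal simulation sketch; the constants are illustrative, not from the paper:

```python
def kinetic_scroll(y0, v0, doi, base_friction=0.02, gain=0.2, dt=1.0):
    """Simulate content-aware kinetic scrolling: per-step friction
    grows with the degree-of-interest doi(y) at the current offset,
    so flicks decelerate near interesting content."""
    y, v, trace = y0, v0, []
    while abs(v) > 0.5:
        friction = base_friction + gain * doi(y)
        v *= (1.0 - friction)               # pseudo-haptic drag
        y += v * dt
        trace.append(y)
    return trace

# Example DOI: one hotspot of collective interest around y = 3000.
hotspot = lambda y: 1.0 if 2800 <= y <= 3200 else 0.0
path = kinetic_scroll(y0=0.0, v0=120.0, doi=hotspot)
```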
SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal
Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds
{"title":"SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal","authors":"Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds","doi":"10.1145/2642918.2647380","DOIUrl":"https://doi.org/10.1145/2642918.2647380","url":null,"abstract":"Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches either require a dedicated surface or line-of-sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone's transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83432749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
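The pulse-to-continuous-wave step is, in spirit, envelope extraction: recover a smooth amplitude signal from bursty pulses so hand-induced variation becomes a low-frequency waveform. A generic demodulation sketch (rectify plus moving-average low-pass); SideSwipe's actual algorithm is more specific to GSM burst timing:

```python
import numpy as np

def pulse_envelope(samples, win=64):
    """Turn a bursty pulsed amplitude stream into a smooth envelope
    by rectifying and low-pass filtering with a moving average."""
    rectified = np.abs(samples)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# A waving hand modulates this envelope; gesture recognition then
# operates on features of the smoothed signal.
```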
HaptoMime: mid-air haptic interaction with a floating virtual screen
Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda
{"title":"HaptoMime: mid-air haptic interaction with a floating virtual screen","authors":"Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda","doi":"10.1145/2642918.2647407","DOIUrl":"https://doi.org/10.1145/2642918.2647407","url":null,"abstract":"We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87285018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 124
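Electronic beam steering of this kind rests on standard phased-array focusing: delay each transducer so that all wavefronts arrive at the focal point simultaneously, i.e. fire the farthest element first. A sketch of that delay computation (textbook acoustics, not HaptoMime's specific hardware parameters):

```python
import numpy as np

def focusing_delays(transducer_xyz, focus_xyz, c=346.0):
    """Per-transducer emission delays (seconds) that focus an
    ultrasound phased array at a 3D point; c is the speed of sound
    in air (m/s, near room temperature)."""
    dists = np.linalg.norm(transducer_xyz - focus_xyz, axis=1)
    return (dists.max() - dists) / c

# 4x4 planar array, 10 mm pitch, focused 20 cm above its center.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
array = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
delays = focusing_delays(array, np.array([0.015, 0.015, 0.2]))
```

Re-solving the delays as the tracked fingertip moves keeps the focal point, and hence the radiation-pressure force, on the finger.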
Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning
J. Avery, Mark Choi, Daniel Vogel, E. Lank
{"title":"Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning","authors":"J. Avery, Mark Choi, Daniel Vogel, E. Lank","doi":"10.1145/2642918.2647352","DOIUrl":"https://doi.org/10.1145/2642918.2647352","url":null,"abstract":"Despite its popularity, the classic pinch-to-zoom gesture used in modern multi-touch interfaces has drawbacks: specifically, the need to support an extended range of scales and the need to keep content within the view window on the display can result in the need to clutch and pan. In two formative studies of unimanual and bimanual pinch-to-zoom, we found patterns: zooming actions follows a predictable ballistic velocity curve, and users tend to pan the point-of-interest towards the center of the screen. We apply these results to design an enhanced zooming technique called Pinch-to-Zoom-Plus (PZP) that reduces clutching and panning operations compared to standard pinch-to-zoom behaviour.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85261503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
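One of the two observed patterns, panning the point of interest toward screen center, can be folded directly into the zoom transform. A speculative sketch of that idea only: scale about a pivot that drifts centerward each step. The 10% drift rate is illustrative, and the paper's actual PZP model is velocity-based:

```python
import numpy as np

def zoom_step(content_offset, poi_screen, screen_center, scale_ratio,
              centering_rate=0.1):
    """One zoom update that scales about the pinch point while
    drifting the point of interest toward screen center, mimicking
    the panning users were observed to do manually."""
    pivot = poi_screen + centering_rate * (screen_center - poi_screen)
    # Standard scale-about-pivot applied to the content offset.
    return pivot + scale_ratio * (content_offset - pivot)

offset = zoom_step(np.array([100.0, 50.0]), np.array([400.0, 300.0]),
                   np.array([512.0, 384.0]), scale_ratio=1.2)
```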