Proceedings of the 27th annual ACM symposium on User interface software and technology: Latest Publications

Graffiti fur: turning your carpet into a computer display
Yuta Sugiura, Koki Toda, T. Hoshi, Youichi Kamiyama, T. Igarashi, M. Inami
{"title":"Graffiti fur: turning your carpet into a computer display","authors":"Yuta Sugiura, Koki Toda, T. Hoshi, Youichi Kamiyama, T. Igarashi, M. Inami","doi":"10.1145/2642918.2647370","DOIUrl":"https://doi.org/10.1145/2642918.2647370","url":null,"abstract":"We devised a display technology that utilizes the phenomenon whereby the shading properties of fur change as the fibers are raised or flattened. One can erase drawings by first flattening the fibers by sweeping the surface by hand in the fiber's growth direction, and then draw lines by raising the fibers by moving the finger in the opposite direction. These material properties can be found in various items such as carpets in our living environments. We have developed three different devices to draw patterns on a \"fur display\" utilizing this phenomenon: a roller device, a pen device and pressure projection device. Our technology can turn ordinary objects in our environment into rewritable displays without requiring or creating any non-reversible modifications to them. In addition, it can be used to present large-scale image without glare, and the images it creates require no running costs to maintain.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76517993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
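To make the drawing mechanism concrete, here is a minimal sketch (not the authors' implementation) of how a pen device might rasterize a binary image into strokes: each contiguous dark run in a row is drawn against an assumed left-to-right fiber growth direction, raising the fibers and darkening the surface.

```python
# A minimal sketch, assuming a left-to-right fiber growth direction:
# turn a binary image into pen-device strokes on a fur display.

def image_to_strokes(pixels):
    """pixels: 2D list of 0/1 values (1 = draw). Returns a list of strokes,
    each stroke being ((row, col_start), (row, col_end))."""
    strokes = []
    for r, row in enumerate(pixels):
        c = 0
        while c < len(row):
            if row[c] == 1:
                start = c
                while c < len(row) and row[c] == 1:
                    c += 1
                # Draw from right to left, i.e. against the assumed fiber
                # growth direction, which raises the fibers.
                strokes.append(((r, c - 1), (r, start)))
            else:
                c += 1
    return strokes

# Example: a 3x5 "T" glyph.
glyph = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
for stroke in image_to_strokes(glyph):
    print(stroke)
```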
In-air gestures around unmodified mobile devices
Jie Song, Gábor Sörös, Fabrizio Pece, S. Fanello, S. Izadi, Cem Keskin, Otmar Hilliges
{"title":"In-air gestures around unmodified mobile devices","authors":"Jie Song, Gábor Sörös, Fabrizio Pece, S. Fanello, S. Izadi, Cem Keskin, Otmar Hilliges","doi":"10.1145/2642918.2647373","DOIUrl":"https://doi.org/10.1145/2642918.2647373","url":null,"abstract":"We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73407728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 175
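As a rough illustration of this style of camera-only gesture recognition, the sketch below classifies binarized hand silhouettes with a random forest over coarse grid-occupancy features. The feature extraction and synthetic training data are illustrative stand-ins, not the paper's pipeline.

```python
# A hedged sketch: random-forest classification of hand silhouettes from
# an RGB camera, using crude grid-occupancy features. Illustrative only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grid_features(frame, grid=4):
    """Downsample a binary hand silhouette (H x W) to a grid x grid
    occupancy vector -- a crude, fast shape descriptor."""
    h, w = frame.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            cells.append(cell.mean())
    return np.array(cells)

# Synthetic stand-in data: two fake "gesture" silhouette distributions.
rng = np.random.default_rng(0)
def fake_frame(gesture):
    frame = (rng.random((32, 32)) > 0.9).astype(float)
    if gesture == 0:
        frame[:16, :] += 1.0   # mass in the upper half ("flat hand")
    else:
        frame[:, :16] += 1.0   # mass in the left half ("pointing")
    return np.clip(frame, 0, 1)

X = np.array([grid_features(fake_frame(g % 2)) for g in range(200)])
y = np.array([g % 2 for g in range(200)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([grid_features(fake_frame(1))]))  # -> [1]
```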
Expert crowdsourcing with flash teams
Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa A. Valentine, Michael S. Bernstein
{"title":"Expert crowdsourcing with flash teams","authors":"Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa A. Valentine, Michael S. Bernstein","doi":"10.1145/2642918.2647409","DOIUrl":"https://doi.org/10.1145/2642918.2647409","url":null,"abstract":"We introduce flash teams, a framework for dynamically assembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accomplishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Interactive systems reason about and manipulate these teams' structures: for example, flash teams can be recombined to form larger organizations and authored automatically in response to a user's request. Flash teams can also hire more people elastically in reaction to task needs, and pipeline intermediate output to accelerate completion times. To enable flash teams, we present Foundry, an end-user authoring platform and runtime manager. Foundry allows users to author modular tasks, then manages teams through handoffs of intermediate work. We demonstrate that Foundry and flash teams enable crowdsourcing of a broad class of goals including design prototyping, course development, and film animation, in half the work time of traditional self-managed teams.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79236465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 222
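The sketch below illustrates the kind of structure such a runtime manages, under assumed semantics: tasks as linked modules whose start times account for pipelined handoffs of intermediate work. The Task fields and the pipeline_after parameter are hypothetical, not Foundry's API.

```python
# A minimal sketch, not Foundry itself: a flash team as linked modular
# tasks, where intermediate output lets a downstream task begin after
# only a fraction of its predecessor has run.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    hours: float
    needs: list = field(default_factory=list)  # upstream task names
    pipeline_after: float = 1.0  # fraction of predecessor done at handoff

def schedule(tasks):
    """Return {task name: start hour}, assuming tasks are listed in
    dependency order."""
    by_name = {t.name: t for t in tasks}
    start = {}
    for t in tasks:
        ready = 0.0
        for dep in t.needs:
            d = by_name[dep]
            ready = max(ready, start[dep] + d.hours * t.pipeline_after)
        start[t.name] = ready
    return start

team = [
    Task("design mockup", hours=8),
    Task("implement UI", hours=12, needs=["design mockup"], pipeline_after=0.5),
    Task("user test", hours=4, needs=["implement UI"]),
]
print(schedule(team))
# {'design mockup': 0.0, 'implement UI': 4.0, 'user test': 16.0}
```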
FlexSense: a transparent self-sensing deformable surface
Christian Rendl, David Kim, S. Fanello, Patrick Parzer, Christoph Rhemann, Jonathan Taylor, M. Zirkl, G. Scheipl, T. Rothländer, M. Haller, S. Izadi
{"title":"FlexSense: a transparent self-sensing deformable surface","authors":"Christian Rendl, David Kim, S. Fanello, Patrick Parzer, Christoph Rhemann, Jonathan Taylor, M. Zirkl, G. Scheipl, T. Rothländer, M. Haller, S. Izadi","doi":"10.1145/2642918.2647405","DOIUrl":"https://doi.org/10.1145/2642918.2647405","url":null,"abstract":"We present FlexSense, a new thin-film, transparent sensing surface based on printed piezoelectric sensors, which can reconstruct complex deformations without the need for any external sensing, such as cameras. FlexSense provides a fully self-contained setup which improves mobility and is not affected from occlusions. Using only a sparse set of sensors, printed on the periphery of the surface substrate, we devise two new algorithms to fully reconstruct the complex deformations of the sheet, using only these sparse sensor measurements. An evaluation shows that both proposed algorithms are capable of reconstructing complex deformations accurately. We demonstrate how FlexSense can be used for a variety of 2.5D interactions, including as a transparent cover for tablets where bending can be performed alongside touch to enable magic lens style effects, layered input, and mode switching, as well as the ability to use our device as a high degree-of-freedom input controller for gaming and beyond.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80709979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 80
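One plausible reading of a learning-based reconstruction is a regression from the sparse peripheral sensor vector to dense surface parameters. The sketch below uses ridge regression on synthetic data as a stand-in for captured sensor/ground-truth pairs; it is not the paper's algorithm.

```python
# A hedged sketch: map a sparse vector of peripheral sensor readings to
# the heights of a dense grid of mesh control points with ridge
# regression. Synthetic data stands in for real recordings.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_sensors, n_ctrl = 16, 64        # sparse inputs -> dense deformation
A = rng.normal(size=(n_ctrl, n_sensors))  # unknown "true" sensor model

sensors = rng.normal(size=(500, n_sensors))            # training readings
deforms = sensors @ A.T + rng.normal(0, 0.01, (500, n_ctrl))  # ground truth

model = Ridge(alpha=1.0).fit(sensors, deforms)
reading = rng.normal(size=(1, n_sensors))
print(model.predict(reading).shape)  # (1, 64): a full reconstructed surface
```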
Skin buttons: cheap, small, low-powered and clickable fixed-icon laser projectors
Gierad Laput, R. Xiao, Xiang 'Anthony' Chen, S. Hudson, Chris Harrison
{"title":"Skin buttons: cheap, small, low-powered and clickable fixed-icon laser projectors","authors":"Gierad Laput, R. Xiao, Xiang 'Anthony' Chen, S. Hudson, Chris Harrison","doi":"10.1145/2642918.2647356","DOIUrl":"https://doi.org/10.1145/2642918.2647356","url":null,"abstract":"Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user's skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these 'skin buttons' can have high touch accuracy and recognizability, while being low cost and power-efficient.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75110664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 119
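For flavor, here is one way a projected icon could be made clickable, assuming a per-button proximity sensor: threshold the sensor stream with hysteresis so each finger touch fires a single click. The thresholds and the sensor model are assumptions, not the paper's design.

```python
# An illustrative sketch, assuming a normalized proximity reading per
# button: hysteresis between press/release levels suppresses jitter so a
# touch registers exactly one click.

def detect_clicks(samples, press=0.8, release=0.4):
    """Yield sample indices where a press event fires."""
    touching = False
    for i, v in enumerate(samples):
        if not touching and v >= press:
            touching = True
            yield i
        elif touching and v <= release:
            touching = False

stream = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.82, 0.2]
print(list(detect_clicks(stream)))  # [2, 7]
```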
WirePrint: 3D printed previews for fast prototyping
Stefanie Müller, Sangha Im, Serafima Gurevich, Alexander Teibrich, Lisa Pfisterer, François Guimbretière, Patrick Baudisch
{"title":"WirePrint: 3D printed previews for fast prototyping","authors":"Stefanie Müller, Sangha Im, Serafima Gurevich, Alexander Teibrich, Lisa Pfisterer, François Guimbretière, Patrick Baudisch","doi":"10.1145/2642918.2647359","DOIUrl":"https://doi.org/10.1145/2642918.2647359","url":null,"abstract":"Even though considered a rapid prototyping tool, 3D printing is so slow that a reasonably sized object requires printing overnight. This slows designers down to a single iteration per day. In this paper, we propose to instead print low-fidelity wireframe previews in the early stages of the design process. Wireframe previews are 3D prints in which surfaces have been replaced with a wireframe mesh. Since wireframe previews are to scale and represent the overall shape of the 3D object, they allow users to quickly verify key aspects of their 3D design, such as the ergonomic fit. To maximize the speed-up, we instruct 3D printers to extrude filament not layer-by-layer, but directly in 3D-space, allowing them to create the edges of the wireframe model directly one stroke at a time. This allows us to achieve speed-ups of up to a factor of 10 compared to traditional layer-based printing. We demonstrate how to achieve wireframe previews on standard FDM 3D printers, such as the PrintrBot or the Kossel mini. Users only need to install the WirePrint software, making our approach applicable to many 3D printers already in use today. Finally, wireframe previews use only a fraction of material required for a regular print, making it even more affordable to iterate.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74118084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 206
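The core trick of extruding directly in 3D space can be pictured as emitting one extrusion move per wireframe edge rather than slicing into layers. This sketch generates illustrative G-code for a single strut; the extrusion ratio and feed rate are made-up values, not WirePrint's tuned parameters.

```python
# A simplified sketch of printing wireframe edges directly in 3D space:
# one G-code extrusion move per edge instead of layer-by-layer slicing.
# Extrusion amount and feed rate are illustrative assumptions.

import math

def edge_gcode(p0, p1, e_per_mm=0.05, feed=300):
    """G-code for travelling to p0 and extruding a strut to p1."""
    length = math.dist(p0, p1)
    return [
        "G0 X%.2f Y%.2f Z%.2f" % p0,               # travel, no extrusion
        "G1 X%.2f Y%.2f Z%.2f E%.4f F%d"
        % (p1 + (length * e_per_mm, feed)),        # extrude along the edge
    ]

# One strut of a wireframe cube, rising diagonally in free space:
for line in edge_gcode((0.0, 0.0, 0.0), (20.0, 0.0, 15.0)):
    print(line)
```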
Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration
Daniel Leithinger, Sean Follmer, A. Olwal, Hiroshi Ishii
{"title":"Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration","authors":"Daniel Leithinger, Sean Follmer, A. Olwal, Hiroshi Ishii","doi":"10.1145/2642918.2647377","DOIUrl":"https://doi.org/10.1145/2642918.2647377","url":null,"abstract":"We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of user's body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86320685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
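A minimal sketch of the capture-to-render path, assuming a pin-based shape display: downsample a remote depth map onto the pin grid and scale it to the pins' travel range. The grid size and depth ranges below are illustrative assumptions, not the prototype's specifications.

```python
# A minimal sketch, assuming a pin-based shape display: block-average a
# remote depth image onto the pin grid and map depth to pin extension.

import numpy as np

def depth_to_pins(depth_mm, grid=(16, 16), near=500.0, far=900.0,
                  travel_mm=100.0):
    """Map an (H, W) depth image in mm to pin extension heights in mm.
    Closer surfaces push pins up further."""
    h, w = depth_mm.shape
    gh, gw = grid
    # Block-average the depth image onto the pin grid.
    pins = depth_mm[:h - h % gh, :w - w % gw]
    pins = pins.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Normalize: near -> fully raised, far -> fully lowered.
    t = np.clip((far - pins) / (far - near), 0.0, 1.0)
    return t * travel_mm

frame = np.full((64, 64), 900.0)
frame[20:40, 20:40] = 600.0          # a hand 600 mm from the camera
print(depth_to_pins(frame).round(1))
```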
Generating emotionally relevant musical scores for audio stories
Steve Rubin, Maneesh Agrawala
{"title":"Generating emotionally relevant musical scores for audio stories","authors":"Steve Rubin, Maneesh Agrawala","doi":"10.1145/2642918.2647406","DOIUrl":"https://doi.org/10.1145/2642918.2647406","url":null,"abstract":"Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85311265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
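The constraint-based dynamic program the abstract describes can be sketched as follows: choose one music segment per emotion-labeled story span, penalizing emotion mismatches and non-adjacent segment transitions. The costs, labels, and transition model here are illustrative assumptions, not the authors' cost model.

```python
# A hedged sketch of re-sequencing music by dynamic programming over
# (story span, music segment) choices. Costs are made-up.

def score_music(story_emotions, segments, mismatch=2.0, jump=1.0):
    """segments: list of (segment_id, emotion). Returns the min-cost
    sequence of segment ids, one per story span."""
    n, m = len(story_emotions), len(segments)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]
    for j, (_, emo) in enumerate(segments):
        cost[0][j] = mismatch * (emo != story_emotions[0])
    for i in range(1, n):
        for j, (_, emo) in enumerate(segments):
            local = mismatch * (emo != story_emotions[i])
            for k in range(m):
                # Non-adjacent source segments make an audible "jump".
                trans = jump * (abs(segments[k][0] - segments[j][0]) != 1)
                if cost[i-1][k] + trans + local < cost[i][j]:
                    cost[i][j] = cost[i-1][k] + trans + local
                    back[i][j] = k
    j = min(range(m), key=lambda j: cost[n-1][j])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return [segments[j][0] for j in reversed(path)]

segments = [(0, "calm"), (1, "calm"), (2, "tense"), (3, "tense")]
print(score_music(["calm", "calm", "tense"], segments))  # [0, 1, 2]
```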
Video lens: rapid playback and exploration of large video collections and associated metadata
Justin Matejka, Tovi Grossman, G. Fitzmaurice
{"title":"Video lens: rapid playback and exploration of large video collections and associated metadata","authors":"Justin Matejka, Tovi Grossman, G. Fitzmaurice","doi":"10.1145/2642918.2647366","DOIUrl":"https://doi.org/10.1145/2642918.2647366","url":null,"abstract":"We present Video Lens, a framework which allows users to visualize and interactively explore large collections of videos and associated metadata. The primary goal of the framework is to let users quickly find relevant sections within the videos and play them back in rapid succession. The individual UI elements are linked and highly interactive, supporting a faceted search paradigm and encouraging exploration of the data set. We demonstrate the capabilities and specific scenarios of Video Lens within the domain of professional baseball videos. A user study with 12 participants indicates that Video Lens efficiently supports a diverse range of powerful yet desirable video query tasks, while a series of interviews with professionals in the field demonstrates the framework's benefits and future potential.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91233751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
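The faceted-search paradigm it describes amounts to intersecting metadata filters over a clip collection. A toy version, with hypothetical baseball fields standing in for the real metadata schema:

```python
# A small sketch of faceted search over clip metadata: selecting facet
# values intersects down to the matching clips for back-to-back playback.
# The fields and values are illustrative.

clips = [
    {"id": 1, "pitcher": "Kershaw", "pitch": "curveball", "outcome": "strike"},
    {"id": 2, "pitcher": "Kershaw", "pitch": "fastball", "outcome": "hit"},
    {"id": 3, "pitcher": "Verlander", "pitch": "curveball", "outcome": "strike"},
]

def facet_filter(clips, **selected):
    """Keep clips matching every selected facet value."""
    return [c for c in clips
            if all(c.get(k) == v for k, v in selected.items())]

playlist = facet_filter(clips, pitch="curveball", outcome="strike")
print([c["id"] for c in playlist])  # [1, 3]
```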
Deconstructing and restyling D3 visualizations
Jonathan Harper, Maneesh Agrawala
{"title":"Deconstructing and restyling D3 visualizations","authors":"Jonathan Harper, Maneesh Agrawala","doi":"10.1145/2642918.2647411","DOIUrl":"https://doi.org/10.1145/2642918.2647411","url":null,"abstract":"The D3 JavaScript library has become a ubiquitous tool for developing visualizations on the Web. Yet, once a D3 visualization is published online its visual style is difficult to change. We present a pair of tools for deconstructing and restyling existing D3 visualizations. Our deconstruction tool analyzes a D3 visualization to extract the data, the marks and the mappings between them. Our restyling tool lets users modify the visual attributes of the marks as well as the mappings from the data to these attributes. Together our tools allow users to easily modify D3 visualizations without examining the underlying code and we show how they can be used to deconstruct and restyle a variety of D3 visualizations.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81125148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 78
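One concrete piece of such a deconstruction is recovering the mapping from a data field to a mark attribute. D3 does bind data to DOM nodes via a __data__ property; the sketch below assumes the (data value, attribute value) pairs have already been extracted from the page, and tests whether a linear scale explains them. It is a stand-in for the paper's mapping analysis, not its implementation.

```python
# A hedged sketch: given the bound data value and a visual attribute for
# each extracted mark, test whether a linear mapping data -> attribute
# explains them (as a D3 linear scale would) and recover its coefficients.

import numpy as np

def infer_linear_mapping(data, attr, tol=1e-6):
    """Fit attr = a * data + b; return (a, b) if the fit is exact enough,
    else None (the attribute is not linearly driven by this field)."""
    a, b = np.polyfit(data, attr, deg=1)
    residual = np.max(np.abs(np.polyval([a, b], data) - attr))
    return (float(a), float(b)) if residual < tol else None

# Heights of four bars extracted from an SVG, with their bound data:
data = np.array([10.0, 25.0, 40.0, 55.0])
height = np.array([40.0, 100.0, 160.0, 220.0])
print(infer_linear_mapping(data, height))  # (4.0, 0.0): height = 4 * value
```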