Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects
Martin Schmitz, Mohammadreza Khalilbeigi, Matthias Balwierz, Roman Lissermann, M. Mühlhäuser, Jürgen Steimle
{"title":"Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects","authors":"Martin Schmitz, Mohammadreza Khalilbeigi, Matthias Balwierz, Roman Lissermann, M. Mühlhäuser, Jürgen Steimle","doi":"10.1145/2807442.2807503","DOIUrl":"https://doi.org/10.1145/2807442.2807503","url":null,"abstract":"3D printing is widely used to physically prototype the look and feel of 3D objects. Interaction possibilities of these prototypes, however, are often limited to mechanical parts or post-assembled electronics. In this paper, we present Capricate, a fabrication pipeline that enables users to easily design and 3D print highly customized objects that feature embedded capacitive multi-touch sensing. The object is printed in a single pass using a commodity multi-material 3D printer. To enable touch input on a wide variety of 3D printable surfaces, we contribute two techniques for designing and printing embedded sensors of custom shape. The fabrication pipeline is technically validated by a series of experiments and practically validated by a set of example applications. They demonstrate the wide applicability of Capricate for interactive objects.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121205994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 154
Capture-Time Feedback for Recording Scripted Narration
Steve Rubin, Floraine Berthouzoz, G. Mysore, Maneesh Agrawala
{"title":"Capture-Time Feedback for Recording Scripted Narration","authors":"Steve Rubin, Floraine Berthouzoz, G. Mysore, Maneesh Agrawala","doi":"10.1145/2807442.2807464","DOIUrl":"https://doi.org/10.1145/2807442.2807464","url":null,"abstract":"Well-performed audio narrations are a hallmark of captivating podcasts, explainer videos, radio stories, and movie trailers. To record these narrations, professional voiceover actors follow guidelines that describe how to use low-level vocal components---volume, pitch, timbre, and tempo---to deliver performances that emphasize important words while maintaining variety, flow and diction. Yet, these techniques are not well-known outside the professional voiceover community, especially among hobbyist producers looking to create their own narrations. We present Narration Coach, an interface that assists novice users in recording scripted narrations. As a user records her narration, our system synchronizes the takes to her script, provides text feedback about how well she is meeting the expert voiceover guidelines, and resynthesizes her recordings to help her hear how she can speak better.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115842710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
User Interaction Models for Disambiguation in Programming by Example
M. Mayer, Gustavo Soares, Maxim Grechkin, Vu Le, Mark Marron, Oleksandr Polozov, Rishabh Singh, B. Zorn, Sumit Gulwani
{"title":"User Interaction Models for Disambiguation in Programming by Example","authors":"M. Mayer, Gustavo Soares, Maxim Grechkin, Vu Le, Mark Marron, Oleksandr Polozov, Rishabh Singh, B. Zorn, Sumit Gulwani","doi":"10.1145/2807442.2807459","DOIUrl":"https://doi.org/10.1145/2807442.2807459","url":null,"abstract":"Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115881962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 87
Codo: Fundraising with Conditional Donations
J. F. Beltran, Aysha Siddique, A. Abouzeid, Jay Chen
{"title":"Codo: Fundraising with Conditional Donations","authors":"J. F. Beltran, Aysha Siddique, A. Abouzeid, Jay Chen","doi":"10.1145/2807442.2807509","DOIUrl":"https://doi.org/10.1145/2807442.2807509","url":null,"abstract":"Crowdfunding websites like Kickstarter and Indiegogo offer project organizers the ability to market, fund, and build a community around their campaign. While offering support and flexibility for organizers, crowdfunding sites provide very little control to donors. In this paper, we investigate the idea of empowering donors by allowing them to specify conditions for their crowdfunding contributions. We introduce a crowdfunding system, Codo, that allows donors to specify conditional donations. Codo allow donors to contribute to a campaign but hold off on their contribution until certain specific conditions are met (e.g. specific members or groups contribute a certain amount). We begin with a micro study to assess several specific conditional donations based on their comprehensibility and usage likelihood. Based on this study, we formalize conditional donations into a general grammar that captures a broad set of useful conditions. We demonstrate the feasibility of resolving conditions in our grammar by elegantly transforming conditional donations into a system of linear inequalities that are efficiently resolved using off-the-shelf linear program solvers. Finally, we designed a user-friendly crowdfunding interface that supports conditional donations for an actual fund raising campaign and assess the potential of conditional donations through this campaign. We find preliminary evidence that roughly 1 in 3 donors make conditional donations and that conditional donors donate more compared to direct donors.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126607041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Unravel: Rapid Web Application Reverse Engineering via Interaction Recording, Source Tracing, and Library Detection
Joshua Hibschman, Haoqi Zhang
{"title":"Unravel: Rapid Web Application Reverse Engineering via Interaction Recording, Source Tracing, and Library Detection","authors":"Joshua Hibschman, Haoqi Zhang","doi":"10.1145/2807442.2807468","DOIUrl":"https://doi.org/10.1145/2807442.2807468","url":null,"abstract":"Professional websites with complex UI features provide real world examples for developers to learn from. Yet despite the availability of source code, it is still difficult to understand how these features are implemented. Existing tools such as the Chrome Developer Tools and Firebug offer debugging and inspection, but reverse engineering is still a time consuming task. We thus present Unravel, an extension of the Chrome Developer Tools for quickly tracking and visualizing HTML changes, JavaScript method calls, and JavaScript libraries. Unravel injects an observation agent into websites to monitor DOM interactions in real-time without functional interference or external dependencies. To manage potentially large observations of events, the Unravel UI provides affordances to reduce, sort, and scope observations. Testing Unravel with 13 web developers on 5 large-scale websites, we found a 53% decrease in time to discovering the first key source behind a UI feature and a 32% decrease in time to understanding how to fully recreate a feature.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123406567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze
Ken Pfeuffer, Jason Alexander, M. K. Chong, Yanxia Zhang, Hans-Werner Gellersen
{"title":"Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze","authors":"Ken Pfeuffer, Jason Alexander, M. K. Chong, Yanxia Zhang, Hans-Werner Gellersen","doi":"10.1145/2807442.2807460","DOIUrl":"https://doi.org/10.1145/2807442.2807460","url":null,"abstract":"Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115071699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
Extreme Computational Photography
R. Raskar
{"title":"Extreme Computational Photography","authors":"R. Raskar","doi":"10.1145/2807442.2814654","DOIUrl":"https://doi.org/10.1145/2807442.2814654","url":null,"abstract":"The Camera Culture Group at the MIT Media Lab aims to create a new class of imaging platforms. This talk will discuss three tracks of research: femto photography, retinal imaging, and 3D displays. Femto Photography consists of femtosecond laser illumination, picosecond-accurate detectors and mathematical reconstruction techniques allowing researchers to visualize propagation of light. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. Using an indirect 'stroboscopic' method that records millions of repeated measurements by careful scanning in time and viewpoints we can rearrange the data to create a 'movie' of a nanosecond long event. Femto photography and a new generation of nano-photography (using ToF cameras) allow powerful inference with computer vision in presence of scattering. EyeNetra is a mobile phone attachment that allows users to test their own eyesight. The device reveals corrective measures thus bringing vision to billions of people who would not have had access otherwise. Another project, eyeMITRA, is a mobile retinal imaging solution that brings retinal exams to the realm of routine care, by lowering the cost of the imaging device to a 10th of its current cost and integrating the device with image analysis software and predictive analytics. This provides early detection of Diabetic Retinopathy that can change the arc of growth of the world's largest cause of blindness. Finally the talk will describe novel lightfield cameras and lightfield displays that require a compressive optical architecture to deal with high bandwidth requirements of 4D signals","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132129384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Automated Email Tagging with Implicit Feedback
Mohammad S. Sorower, Michael Slater, Thomas G. Dietterich
{"title":"Improving Automated Email Tagging with Implicit Feedback","authors":"Mohammad S. Sorower, Michael Slater, Thomas G. Dietterich","doi":"10.1145/2807442.2807501","DOIUrl":"https://doi.org/10.1145/2807442.2807501","url":null,"abstract":"Tagging email is an important tactic for managing information overload. Machine learning methods can help the user with this task by predicting tags for incoming email messages. The natural user interface displays the predicted tags on the email message, and the user doesn't need to do anything unless those predictions are wrong (in which case, the user can delete the incorrect tags and add the missing tags). From a machine learning perspective, this means that the learning algorithm never receives confirmation that its predictions are correct---it only receives feedback when it makes a mistake. This can lead to slower learning, particularly when the predictions were not very confident, and hence, the learning algorithm would benefit from positive feedback. One could assume that if the user never changes any tag, then the predictions are correct, but users sometimes forget to correct the tags, presumably because they are focused on the content of the email messages and fail to notice incorrect and missing tags. The aim of this paper is to determine whether implicit feedback can provide useful additional training examples to the email prediction subsystem of TaskTracer, known as EP2 (Email Predictor 2). Our hypothesis is that the more time a user spends working on an email message, the more likely it is that the user will notice tag errors and correct them. If no corrections are made, then perhaps it is safe for the learning system to treat the predicted tags as being correct and train accordingly. This paper proposes three algorithms (and two baselines) for incorporating implicit feedback into the EP2 tag predictor. These algorithms are then evaluated using email interaction and tag correction events collected from 14 user-study participants as they performed email-directed tasks while using TaskTracer EP2. The results show that implicit feedback produces important increases in training feedback, and hence, significant reductions in subsequent prediction errors despite the fact that the implicit feedback is not perfect. We conclude that implicit feedback mechanisms can provide a useful performance boost for email tagging systems.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"239 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133806685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape
A. Dementyev, H. Kao, J. Paradiso
{"title":"SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape","authors":"A. Dementyev, H. Kao, J. Paradiso","doi":"10.1145/2807442.2807507","DOIUrl":"https://doi.org/10.1145/2807442.2807507","url":null,"abstract":"SensorTape is a modular and dense sensor network in a form factor of a tape. SensorTape is composed of interconnected and programmable sensor nodes on a flexible electronics substrate. Each node can sense its orientation with an inertial measurement unit, allowing deformation self-sensing of the whole tape. Also, nodes sense proximity using time-of-flight infrared. We developed network architecture to automatically determine the location of each sensor node, as SensorTape is cut and rejoined. Also, we made an intuitive graphical interface to program the tape. Our user study suggested that SensorTape enables users with different skill sets to intuitively create and program large sensor network arrays. We developed diverse applications ranging from wearables to home sensing, to show low deployment effort required by the user. We showed how SensorTape could be produced at scale using current technologies and we made a 2.3-meter long prototype.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133942748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 70
Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities
Barrett Ens, Tovi Grossman, Fraser Anderson, Justin Matejka, G. Fitzmaurice
{"title":"Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities","authors":"Barrett Ens, Tovi Grossman, Fraser Anderson, Justin Matejka, G. Fitzmaurice","doi":"10.1145/2807442.2807449","DOIUrl":"https://doi.org/10.1145/2807442.2807449","url":null,"abstract":"The growth of mobile and wearable technologies has made it often difficult to understand what people in our surroundings are doing with their technology. In this paper, we introduce the concept of candid interaction: techniques for providing awareness about our mobile and wearable device usage to others in the vicinity. We motivate and ground this exploration through a survey on current attitudes toward device usage during interpersonal encounters. We then explore a design space for candid interaction through seven prototypes that leverage a wide range of technological enhancements, such as Augmented Reality, shape memory muscle wire, and wearable projection. Preliminary user feedback of our prototypes highlights the trade-offs between the benefits of sharing device activity and the need to protect user privacy.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125889037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45