Proceedings of the 27th annual ACM symposium on User interface software and technology: Latest Publications

AttachMate: highlight extraction from email attachments
J. Hailpern, S. Asur, Kyle Rector
{"title":"AttachMate: highlight extraction from email attachments","authors":"J. Hailpern, S. Asur, Kyle Rector","doi":"10.1145/2642918.2647419","DOIUrl":"https://doi.org/10.1145/2642918.2647419","url":null,"abstract":"While email is a major conduit for information sharing in enterprise, there has been little work on exploring the files sent along with these messages -- attachments. These accompanying documents can be large (multiple megabytes), lengthy (multiple pages), and not optimized for the smaller screen sizes, limited reading time, and expensive bandwidth of mobile users. Thus, attachments can increase data storage costs (for both end users and email servers), drain users' time when irrelevant, cause important information to be missed when ignored, and pose a serious access issue for mobile users. To address these problems we created AttachMate, a novel email attachment summarization system. AttachMate can summarize the content of email attachments and automatically insert the summary into the text of the email. AttachMate also stores all files in the cloud, reducing file storage costs and bandwidth consumption. In this paper, the primary contribution is the AttachMate client/server architecture. To ground, support and validate the AttachMate system we present two upfront studies (813 participants) to understand the state and limitations of attachments, a novel algorithm to extract representative concept sentences (tested through two validation studies), and a user study of AttachMate within an enterprise.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87124268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
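The abstract mentions a novel algorithm for extracting representative concept sentences but does not specify it. As a point of reference, here is a minimal frequency-based extractive sketch in Python; the scoring scheme and function names are my own illustration, not the authors' method.

```python
import re
from collections import Counter

def extract_highlights(text: str, k: int = 3) -> list[str]:
    """Return the k sentences whose words are most frequent overall
    (a generic extractive-summarization baseline, not AttachMate's algorithm)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    # Rank by score, then restore original document order for readability.
    top = sorted(sentences, key=score, reverse=True)[:k]
    return sorted(top, key=sentences.index)
```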
Air+touch: interweaving touch & in-air gestures
Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, S. Hudson
{"title":"Air+touch: interweaving touch & in-air gestures","authors":"Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, S. Hudson","doi":"10.1145/2642918.2647392","DOIUrl":"https://doi.org/10.1145/2642918.2647392","url":null,"abstract":"We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86439872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 120
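The taxonomy keys on whether the in-air component occurs before, between, or after touches. A toy classifier over event timestamps (my illustration, not the paper's code):

```python
from enum import Enum

class AirPhase(Enum):
    BEFORE = "in-air gesture precedes the first touch"
    BETWEEN = "in-air gesture occurs between touches"
    AFTER = "in-air gesture follows the last touch"

def classify(touch_times: list[float], air_start: float, air_end: float) -> AirPhase:
    """Classify by comparing the gesture interval to the touch timestamps
    (touch_times must be non-empty)."""
    first, last = min(touch_times), max(touch_times)
    if air_end <= first:
        return AirPhase.BEFORE
    if air_start >= last:
        return AirPhase.AFTER
    # Overlap with the touch sequence: treat as the 'between' case
    # (e.g. a finger 'high jump' bridging two taps).
    return AirPhase.BETWEEN

print(classify([0.0, 1.2], air_start=0.3, air_end=0.9))  # AirPhase.BETWEEN
```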
Declarative interaction design for data visualization
Arvind Satyanarayan, Kanit Wongsuphasawat, Jeffrey Heer
{"title":"Declarative interaction design for data visualization","authors":"Arvind Satyanarayan, Kanit Wongsuphasawat, Jeffrey Heer","doi":"10.1145/2642918.2647360","DOIUrl":"https://doi.org/10.1145/2642918.2647360","url":null,"abstract":"Declarative visualization grammars can accelerate development, facilitate retargeting across platforms, and allow language-level optimizations. However, existing declarative visualization languages are primarily concerned with visual encoding, and rely on imperative event handlers for interactive behaviors. In response, we introduce a model of declarative interaction design for data visualizations. Adopting methods from reactive programming, we model low-level events as composable data streams from which we form higher-level semantic signals. Signals feed predicates and scale inversions, which allow us to generalize interactive selections at the level of item geometry (pixels) into interactive queries over the data domain. Production rules then use these queries to manipulate the visualization's appearance. To facilitate reuse and sharing, these constructs can be encapsulated as named interactors: standalone, purely declarative specifications of interaction techniques. We assess our model's feasibility and expressivity by instantiating it with extensions to the Vega visualization grammar. Through a diverse range of examples, we demonstrate coverage over an established taxonomy of visualization interaction techniques.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76175848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 106
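The pipeline the abstract describes (events feed signals, signals feed scale inversions and predicates, and production rules update the view) can be sketched with a few lines of reactive plumbing. The Signal class and the linear scale below are stand-ins of my own, not Vega's actual constructs:

```python
class Signal:
    """A toy reactive value: observers re-run whenever it updates."""
    def __init__(self, value=None):
        self.value = value
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)

    def update(self, value):
        self.value = value
        for fn in self._listeners:
            fn(value)

# Signal fed by low-level drag events (pixel coordinates).
drag_x = Signal()

# Scale inversion: pixels -> data domain (assume a linear x-scale where
# 0..800 px maps to the data range 0..100).
def invert_x(px):
    return px / 800 * 100

selected_value = Signal()
drag_x.subscribe(lambda px: selected_value.update(invert_x(px)))

# Production rule: highlight marks whose datum satisfies the predicate.
data = [12.0, 47.5, 63.0, 90.0]
def production_rule(threshold):
    highlighted = [d for d in data if d <= threshold]  # predicate over the data domain
    print(f"highlight marks with value <= {threshold:.1f}: {highlighted}")

selected_value.subscribe(production_rule)
drag_x.update(520)  # simulated mouse drag at x = 520 px -> threshold 65.0
```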
Paper3D: bringing casual 3D modeling to a multi-touch interface
P. Paczkowski, Julie Dorsey, H. Rushmeier, Min H. Kim
{"title":"Paper3D: bringing casual 3D modeling to a multi-touch interface","authors":"P. Paczkowski, Julie Dorsey, H. Rushmeier, Min H. Kim","doi":"10.1145/2642918.2647416","DOIUrl":"https://doi.org/10.1145/2642918.2647416","url":null,"abstract":"A 3D modeling system that provides all-inclusive functionality is generally too demanding for a casual 3D modeler to learn. In recent years, there has been a shift towards developing more approachable systems, with easy-to-learn, intuitive interfaces. However, most modeling systems still employ mouse and keyboard interfaces, despite the ubiquity of tablet devices, and the benefits of multi-touch interfaces applied to 3D modeling. In this paper, we introduce an alternative 3D modeling paradigm for creating developable surfaces, inspired by traditional papercrafting, and implemented as a system designed from the start for a multi-touch tablet. We demonstrate the process of assembling complex 3D scenes from a collection of simpler models, in turn shaped through operations applied to sheets of virtual paper. The modeling and assembling operations mimic familiar, real-world operations performed on paper, allowing users to quickly learn our system with very little guidance. We outline key design decisions made throughout the development process, based on feedback obtained through collaboration with target users. Finally, we include a range of models created in our system.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85695349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Vibkinesis: notification by direct tap and 'dying message' using vibronic movement controllable smartphones
Shota Yamanaka, Homei Miyashita
{"title":"Vibkinesis: notification by direct tap and 'dying message' using vibronic movement controllable smartphones","authors":"Shota Yamanaka, Homei Miyashita","doi":"10.1145/2642918.2647365","DOIUrl":"https://doi.org/10.1145/2642918.2647365","url":null,"abstract":"We propose Vibkinesis, a smartphone that can control its angle and directions of movement and rotation. By separately controlling the vibration motors attached to it, the smartphone can move on a table in the direction it chooses. Vibkinesis can inform a user of a message received when the user is away from the smartphone by changing its orientation, e.g., the smartphone has rotated 90° to the left before the user returns to the smartphone. With this capability, Vibkinesis can notify the user of a message even if the battery is discharged. We also extend the sensing area of Vibkinesis by using an omni-directional lens so that the smartphone tracks the surrounding objects. This allows Vibkinesis to tap the user's hand. These novel interactions expand the mobile device's movement area, notification channels, and notification time span.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89616654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
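The abstract explains motion control only at a high level: separately driving the attached vibration motors steers the device. The sketch below guesses at what a command layer could look like; the motor layout, duty cycles, and burst counts are all assumptions, not the authors' firmware.

```python
from dataclasses import dataclass

@dataclass
class MotorCommand:
    left: float        # duty cycle 0..1 for a hypothetical left-edge motor
    right: float       # duty cycle 0..1 for a hypothetical right-edge motor
    duration_s: float

# Asymmetric drive biases friction so the device translates or rotates.
# These duty cycles are placeholders; a real system would calibrate them
# per device and per surface.
PATTERNS = {
    "forward":      MotorCommand(left=0.8, right=0.8, duration_s=0.5),
    "rotate_left":  MotorCommand(left=0.2, right=0.8, duration_s=0.3),
    "rotate_right": MotorCommand(left=0.8, right=0.2, duration_s=0.3),
}

def dying_message_rotation() -> list[MotorCommand]:
    # A 90-degree left turn issued before battery exhaustion could be a fixed
    # burst sequence, ideally closed-loop against an IMU heading reading.
    return [PATTERNS["rotate_left"]] * 3
```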
Laevo: a temporal desktop interface for integrated knowledge work
S. Jeuris, Steven Houben, J. Bardram
{"title":"Laevo: a temporal desktop interface for integrated knowledge work","authors":"S. Jeuris, Steven Houben, J. Bardram","doi":"10.1145/2642918.2647391","DOIUrl":"https://doi.org/10.1145/2642918.2647391","url":null,"abstract":"Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which doubles as a to-do list, incoming interruptions can be handled. Our field study indicates how highlighting the temporal nature of activities results in lightweight scalable activity management, while making users more aware about their ongoing and planned work.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87699900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
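The activity life cycle is a conceptual framework of states and transitions, but the abstract does not enumerate them. The states below are illustrative placeholders showing how such a life cycle might be enforced in code, not the paper's actual framework:

```python
from enum import Enum, auto

class ActivityState(Enum):
    PLANNED = auto()      # on the timeline, not yet started
    ACTIVE = auto()       # its workspace currently has focus
    INTERRUPTED = auto()  # preempted by an incoming notification
    SUSPENDED = auto()    # set aside, resumable later
    COMPLETED = auto()

# Allowed transitions (assumed for illustration).
TRANSITIONS = {
    ActivityState.PLANNED:     {ActivityState.ACTIVE},
    ActivityState.ACTIVE:      {ActivityState.INTERRUPTED,
                                ActivityState.SUSPENDED,
                                ActivityState.COMPLETED},
    ActivityState.INTERRUPTED: {ActivityState.ACTIVE},
    ActivityState.SUSPENDED:   {ActivityState.ACTIVE},
    ActivityState.COMPLETED:   set(),
}

def transition(current: ActivityState, nxt: ActivityState) -> ActivityState:
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```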
Gaze-touch: combining gaze with multi-touch for interaction on the same surface
Ken Pfeuffer, Jason Alexander, M. K. Chong, Hans-Werner Gellersen
{"title":"Gaze-touch: combining gaze with multi-touch for interaction on the same surface","authors":"Ken Pfeuffer, Jason Alexander, M. K. Chong, Hans-Werner Gellersen","doi":"10.1145/2642918.2647397","DOIUrl":"https://doi.org/10.1145/2642918.2647397","url":null,"abstract":"Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of 'gaze selects, touch manipulates'. Gaze is used to select a target, and coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76865180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 100
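The principle 'gaze selects, touch manipulates' suggests a simple division of labor. A minimal sketch under that assumption, with invented hit-testing and gesture plumbing:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float
    y: float
    scale: float = 1.0

def gaze_select(targets, gaze_x, gaze_y):
    # Gaze designates the target nearest the estimated gaze point.
    return min(targets, key=lambda t: (t.x - gaze_x) ** 2 + (t.y - gaze_y) ** 2)

def touch_manipulate(target, pinch_factor):
    # Multi-touch gestures performed anywhere on the surface are routed to
    # the gaze-selected target, so occluded or distant targets stay reachable.
    target.scale *= pinch_factor

targets = [Target("map", 100, 80), Target("photo", 600, 420)]
focused = gaze_select(targets, gaze_x=590, gaze_y=400)  # user looks at the photo
touch_manipulate(focused, pinch_factor=1.5)             # pinch-out near the bezel
print(focused)  # Target(name='photo', x=600, y=420, scale=1.5)
```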
CommandSpace: modeling the relationships between tasks, descriptions and features
Eytan Adar, Mira Dontcheva, Gierad Laput
{"title":"CommandSpace: modeling the relationships between tasks, descriptions and features","authors":"Eytan Adar, Mira Dontcheva, Gierad Laput","doi":"10.1145/2642918.2647395","DOIUrl":"https://doi.org/10.1145/2642918.2647395","url":null,"abstract":"Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations, such as identifying likely system commands given natural language queries and identifying user tasks given a trace of user operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop. We build and evaluate several applications enabled by our model showing the power and flexibility of this approach.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73423520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
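Once embeddings exist, the core mapping (ranking system commands against a natural-language query by proximity in a shared vector space) reduces to cosine similarity. A sketch with placeholder random vectors; the paper learns real embeddings from a Web corpus, and the command names here are merely plausible Photoshop menu paths:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
command_vectors = {  # placeholder embeddings, 50 dimensions each
    "Image > Adjustments > Desaturate": rng.normal(size=50),
    "Filter > Blur > Gaussian Blur": rng.normal(size=50),
    "Image > Image Size": rng.normal(size=50),
}

def rank_commands(query_vector: np.ndarray) -> list[str]:
    """Order commands by similarity to the embedded query, best first."""
    return sorted(command_vectors,
                  key=lambda cmd: cosine(query_vector, command_vectors[cmd]),
                  reverse=True)

# In the real system the query vector would come from embedding a phrase
# like "make my photo black and white"; here we just use a random stand-in.
print(rank_commands(rng.normal(size=50)))
```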
Cheaper by the dozen: group annotation of 3D data
A. Boyko, T. Funkhouser
{"title":"Cheaper by the dozen: group annotation of 3D data","authors":"A. Boyko, T. Funkhouser","doi":"10.1145/2642918.2647418","DOIUrl":"https://doi.org/10.1145/2642918.2647418","url":null,"abstract":"This paper proposes a group annotation approach to interactive semantic labeling of data and demonstrates the idea in a system for labeling objects in 3D LiDAR scans of a city. In this approach, the system selects a group of objects, predicts a semantic label for it, and highlights it in an interactive display. In response, the user either confirms the predicted label, provides a different label, or indicates that no single label can be assigned to all objects in the group. This sequence of interactions repeats until a label has been confirmed for every object in the data set. The main advantage of this approach is that it provides faster interactive labeling rates than alternative approaches, especially in cases where all labels must be explicitly confirmed by a person. The main challenge is to provide an algorithm that selects groups with many objects all of the same label type arranged in patterns that are quick to recognize, which requires models for predicting object labels and for estimating times for people to recognize objects in groups. We address these challenges by defining an objective function that models the estimated time required to process all unlabeled objects and approximation algorithms to minimize it. Results of user studies suggest that group annotation can be used to label objects in LiDAR scans of cities significantly faster than one-by-one annotation with active learning.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79732025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
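The abstract describes an objective function modeling the estimated time to process all unlabeled objects. A toy version of that trade-off, with invented time constants (the paper fits models of human recognition time rather than hard-coding them):

```python
# Assumed constants, for illustration only.
T_CONFIRM = 1.5   # s: confirm a correctly predicted group label
T_PER_ITEM = 0.4  # s: extra scanning time per object in the group
T_FIX = 5.0       # s: penalty when the group is mixed and must be split up

def expected_time_per_label(group_size: int, p_homogeneous: float) -> float:
    """Expected seconds of user time per confirmed label for one group.

    Larger groups amortize the confirmation cost over more labels, but a
    lower probability that all members share one label raises the expected
    penalty, so the objective balances group size against predictor confidence.
    """
    expected_cost = (T_CONFIRM + T_PER_ITEM * group_size
                     + (1 - p_homogeneous) * T_FIX)
    expected_labels = p_homogeneous * group_size
    return expected_cost / max(expected_labels, 1e-9)

candidates = [(1, 0.99), (8, 0.9), (30, 0.6)]  # (size, P(all same label))
best = min(candidates, key=lambda c: expected_time_per_label(*c))
print(best)  # the 8-object group wins under these constants
```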
A series of tubes: adding interactivity to 3D prints using internal pipes
Valkyrie Savage, Ryan M. Schmidt, Tovi Grossman, G. Fitzmaurice, Bjoern Hartmann
{"title":"A series of tubes: adding interactivity to 3D prints using internal pipes","authors":"Valkyrie Savage, Ryan M. Schmidt, Tovi Grossman, G. Fitzmaurice, Bjoern Hartmann","doi":"10.1145/2642918.2647374","DOIUrl":"https://doi.org/10.1145/2642918.2647374","url":null,"abstract":"3D printers offer extraordinary flexibility for prototyping the shape and mechanical function of objects. We investigate how 3D models can be modified to facilitate the creation of interactive objects that offer dynamic input and output. We introduce a general technique for supporting the rapid prototyping of interactivity by removing interior material from 3D models to form internal pipes. We describe this new design space of pipes for interaction design, where variables include openings, path constraints, topologies, and inserted media. We then present PipeDream, a tool for routing such pipes through the interior of 3D models, integrated within a 3D modeling program. We use two distinct routing algorithms. The first has users define pipes' terminals, and uses path routing and physics-based simulation to minimize pipe bending energy, allowing easy insertion of media post-print. The second allows users to supply a desired internal shape to which we fit a pipe route: for this we describe a graph-routing algorithm. We present several prototypes created using our tool to show its flexibility and potential.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91246995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 105
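PipeDream's first routing mode minimizes pipe bending energy between user-defined terminals. A discrete 2D stand-in, Dijkstra with a turn penalty, illustrates the idea; the real system routes smooth pipes through 3D meshes with physics-based simulation rather than grid search.

```python
import heapq
import itertools

def route(start, goal, blocked, size, bend_cost=3.0):
    """Cheapest path on a size x size grid, charging bend_cost per turn
    (a crude discrete analogue of penalizing pipe bending energy)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    tie = itertools.count()  # tie-breaker so the heap never compares paths
    pq = [(0.0, next(tie), start, None, [start])]
    seen = set()
    while pq:
        cost, _, cell, heading, path = heapq.heappop(pq)
        if cell == goal:
            return path
        if (cell, heading) in seen:
            continue
        seen.add((cell, heading))
        for d in moves:
            nxt = (cell[0] + d[0], cell[1] + d[1])
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in blocked:
                continue
            turn = bend_cost if heading not in (None, d) else 0.0
            heapq.heappush(pq, (cost + 1.0 + turn, next(tie), nxt, d, path + [nxt]))
    return None  # no route exists

print(route((0, 0), (4, 4), blocked={(2, 2), (2, 3)}, size=5))
```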