Proceedings of the 27th annual ACM symposium on User interface software and technology: Latest Publications

Prefab layers and prefab annotations: extensible pixel-based interpretation of graphical interfaces
M. Dixon, A. C. Nied, J. Fogarty
DOI: https://doi.org/10.1145/2642918.2647412
Abstract: Pixel-based methods have the potential to fundamentally change how we build graphical interfaces, but remain difficult to implement. We introduce a new toolkit for pixel-based enhancements, focused on two areas of support. Prefab Layers helps developers write interpretation logic that can be composed, reused, and shared to manage the multi-faceted nature of pixel-based interpretation. Prefab Annotations supports robustly annotating interface elements with the metadata needed to enable runtime enhancements. Together, these help developers overcome subtle but critical dependencies between code and data. We validate our toolkit with (1) demonstrative applications and (2) a lab study comparing how developers build an enhancement using our toolkit versus state-of-the-art methods. Our toolkit addresses core challenges developers face when building pixel-based enhancements, potentially opening pixel-based systems to broader adoption.
Citations: 27
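The layer composition described in the abstract can be pictured with a short sketch. The names below (Annotation, Layer, detect_buttons, tag_small_targets) are hypothetical and not part of the actual Prefab toolkit; this only illustrates, under that assumption, how interpretation logic that consumes a screenshot plus earlier layers' output could be composed and reused.

```python
# Minimal illustration (not the actual Prefab API) of composable
# pixel-interpretation layers: each layer reads the raw screenshot plus the
# annotations produced by earlier layers, and may contribute new annotations.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Annotation:
    kind: str                          # e.g. "button", "target"
    bounds: Tuple[int, int, int, int]  # x, y, width, height
    metadata: Dict[str, str] = field(default_factory=dict)

# A layer is just a function: (pixels, annotations so far) -> new annotations.
Layer = Callable[[bytes, List[Annotation]], List[Annotation]]

def run_pipeline(pixels: bytes, layers: List[Layer]) -> List[Annotation]:
    annotations: List[Annotation] = []
    for layer in layers:
        annotations.extend(layer(pixels, annotations))
    return annotations

# Hypothetical layers that could be composed, reused, or shared.
def detect_buttons(pixels: bytes, found: List[Annotation]) -> List[Annotation]:
    # A real layer would match pixel patterns; this stub returns nothing.
    return []

def tag_small_targets(pixels: bytes, found: List[Annotation]) -> List[Annotation]:
    # Annotate previously detected buttons that are small enough to need help.
    return [Annotation("target", a.bounds, {"reason": "small"})
            for a in found
            if a.kind == "button" and a.bounds[2] * a.bounds[3] < 400]

if __name__ == "__main__":
    print(run_pipeline(b"", [detect_buttons, tag_small_targets]))
```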
3D-board: a whole-body remote collaborative whiteboard
Jakob Zillner, Christoph Rhemann, S. Izadi, M. Haller
DOI: https://doi.org/10.1145/2642918.2647393
Abstract: This paper presents 3D-Board, a digital whiteboard capable of capturing life-sized virtual embodiments of geographically distributed users. When using large-scale screens for remote collaboration, awareness of the distributed users' gestures and actions is of particular importance. Our work adds to the literature on remote collaborative workspaces: it facilitates intuitive remote collaboration on large-scale interactive whiteboards by preserving awareness of the full-body pose and gestures of the remote collaborator. By blending the front-facing 3D embodiment of a remote collaborator with the shared workspace, an illusion is created as if the observer were looking through the transparent whiteboard into the remote user's room. The system was tested and verified in a usability assessment, showing that 3D-Board significantly improves the effectiveness of remote collaboration on a large interactive surface.
Citations: 45
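The blending idea in the abstract (seeing the remote collaborator "through" the whiteboard) can be sketched as a simple compositing step. This is not the 3D-Board implementation; it assumes the remote user's embodiment has already been segmented into an RGB image and a mask.

```python
# Minimal compositing sketch (not 3D-Board's renderer): show the remote user's
# segmented silhouette faintly behind the board, with shared ink drawn on top.
import numpy as np

def composite(whiteboard_rgb: np.ndarray,
              stroke_mask: np.ndarray,
              remote_user_rgb: np.ndarray,
              user_mask: np.ndarray,
              user_opacity: float = 0.5) -> np.ndarray:
    """All images are HxWx3 floats in [0, 1]; masks are HxW floats in [0, 1]."""
    background = np.ones_like(whiteboard_rgb)           # white board surface
    user_layer = user_mask[..., None] * user_opacity    # faint embodiment
    blended = background * (1 - user_layer) + remote_user_rgb * user_layer
    ink = stroke_mask[..., None]                         # opaque shared strokes
    return blended * (1 - ink) + whiteboard_rgb * ink

if __name__ == "__main__":
    h, w = 4, 4
    out = composite(np.zeros((h, w, 3)), np.zeros((h, w)),
                    np.ones((h, w, 3)) * 0.2, np.ones((h, w)))
    print(out.shape)  # (4, 4, 3)
```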
FlatFitFab: interactive modeling with planar sections
James McCrae, Nobuyuki Umetani, Karan Singh
DOI: https://doi.org/10.1145/2642918.2647388
Abstract: We present a comprehensive system to author planar section structures, common in art and engineering. A study of how planar section assemblies are imagined and drawn guides our design principles: planar sections are best drawn in situ, with little foreshortening, orthogonal to intersecting planar sections, and exhibiting regularities between planes and contours. We capture these principles with a novel drawing workflow in which a single fluid user stroke specifies a 3D plane and its contour in relation to existing planar sections. Regularity is supported by defining a vocabulary of procedural operations for intersecting planar sections. We exploit planar structure properties to provide real-time visual feedback on physically simulated stresses, and geometric verification that the structure is stable, connected, and can be assembled. This feedback is validated by real-world fabrication and testing. As evaluation, we report on over 50 subjects who all used our system with minimal instruction to create unique models.
Citations: 82
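One step implied by the abstract, recovering a plane from a user stroke, can be sketched with a least-squares fit. This is purely illustrative: it assumes the stroke points have already been lifted to 3D (for example, by intersecting view rays with existing sections), which is only a small fraction of the actual FlatFitFab workflow.

```python
# Illustrative only: fit a least-squares plane to 3D stroke samples.
import numpy as np

def fit_plane(points: np.ndarray):
    """Plane through Nx3 points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

if __name__ == "__main__":
    stroke = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0],
                       [2.0, -0.1, 0.0], [3.0, 0.05, 0.0]])
    c, n = fit_plane(stroke)
    print("centroid", c, "normal", n)
```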
Video digests: a browsable, skimmable format for informational lecture videos
Amy Pavel, Colorado Reed, Bjoern Hartmann, Maneesh Agrawala
DOI: https://doi.org/10.1145/2642918.2647400
Abstract: Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that afford browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study, we find that, given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.
Citations: 103
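The chapter/section structure the abstract describes can be sketched as a small data model; the field names here are illustrative, not the authors' actual schema.

```python
# Illustrative data model for a video digest: chapters contain sections, and
# each section carries a summary, a thumbnail, and the time it seeks to.
from dataclasses import dataclass
from typing import List

@dataclass
class Section:
    start_seconds: float   # where clicking this section seeks the video
    summary: str           # short text summary shown to the viewer
    thumbnail_url: str

@dataclass
class Chapter:
    title: str
    sections: List[Section]

def seek_target(digest: List[Chapter], chapter_idx: int, section_idx: int) -> float:
    """Return the playback position for the section the viewer clicked."""
    return digest[chapter_idx].sections[section_idx].start_seconds

if __name__ == "__main__":
    digest = [Chapter("Introduction",
                      [Section(0.0, "Course overview", "thumb0.jpg"),
                       Section(95.0, "Grading policy", "thumb1.jpg")])]
    print(seek_target(digest, 0, 1))  # 95.0
```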
InterTwine: creating interapplication information scent to support coordinated use of software
Adam Fourney, B. Lafreniere, Parmit K. Chilana, Michael A. Terry
DOI: https://doi.org/10.1145/2642918.2647420
Abstract: Users often make continued and sustained use of online resources to complement their use of a desktop application. For example, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and the desktop application provide separate, independent mechanisms for helping users find and re-find task-relevant information. In this paper, we describe InterTwine, a system that links information in the web browser with relevant elements in the desktop application to create interapplication information scent. This explicit link produces a shared interapplication history to assist in re-finding information in both applications. As an example, InterTwine marks all menu items in the desktop application that are currently mentioned in the front-most web page. This paper introduces the notion of interapplication information scent, demonstrates the concept in InterTwine, and describes results from a formative study suggesting the utility of the concept.
Citations: 36
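The example given in the abstract (marking desktop menu items mentioned in the front-most web page) can be sketched as a simple matching step; the function name below is hypothetical, not InterTwine's API.

```python
# Illustrative sketch of "interapplication information scent": find the menu
# items whose labels appear verbatim in the text of the front-most web page.
from typing import List, Set

def mentioned_menu_items(menu_items: List[str], page_text: str) -> Set[str]:
    text = page_text.lower()
    return {item for item in menu_items if item.lower() in text}

if __name__ == "__main__":
    menus = ["Filters > Blur > Gaussian Blur", "Select > Invert", "File > Export As"]
    tutorial = "Apply Filters > Blur > Gaussian Blur, then choose File > Export As."
    print(mentioned_menu_items(menus, tutorial))
```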
Programming by manipulation for layout
Thibaud Hottelier, R. Bodík, Kimiko Ryokai
DOI: https://doi.org/10.1145/2642918.2647378
Abstract: We present Programming by Manipulation, a new programming methodology for specifying the layout of data visualizations, targeted at non-programmers. We address the two central sources of bugs that arise when programming with constraints: ambiguities and conflicts (inconsistencies). We rule out conflicts by design and exploit ambiguity to explore possible layout designs. Our users design layouts by highlighting undesirable aspects of the current design, effectively breaking spurious constraints and introducing ambiguity by giving some elements the freedom to move or resize. Subsequently, the tool indicates how the ambiguity can be removed by computing how the free elements can be fixed with available constraints. To support this workflow, our tool computes the ambiguity and summarizes it visually. We evaluate our work with two user studies demonstrating that both non-programmers and programmers can effectively use our prototype. Our results suggest that our tool is five times more productive than direct programming with constraints.
Citations: 29
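The workflow in the abstract (break a constraint to introduce ambiguity, then let the tool propose constraints that remove it) can be sketched abstractly. This is an illustrative toy, not the authors' solver: variables stand for element properties, and each constraint is reduced to the set of variables it pins.

```python
# Toy sketch of ambiguity detection and resolution suggestions for a
# constraint-based layout (not the actual Programming by Manipulation system).
from typing import Dict, List, Set

def free_variables(variables: Set[str], active: List[Set[str]]) -> Set[str]:
    """Variables not mentioned by any active constraint are ambiguous (free)."""
    pinned = set().union(*active) if active else set()
    return variables - pinned

def suggest_fixes(free: Set[str], available: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Available constraints that would pin at least one free variable."""
    return {name: vars_ & free for name, vars_ in available.items() if vars_ & free}

if __name__ == "__main__":
    variables = {"panel.width", "panel.x", "label.x"}
    active = [{"panel.x"}]                       # the user broke the width constraint
    available = {"fill-parent": {"panel.width"},
                 "align-left": {"label.x", "panel.x"}}
    free = free_variables(variables, active)
    print("ambiguous:", free)
    print("candidate constraints:", suggest_fixes(free, available))
```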
Humane representation of thought: a trail map for the 21st century
Bret Victor
DOI: https://doi.org/10.1145/2642918.2642920
Abstract: New representations of thought -- written language, mathematical notation, information graphics, etc. -- have been responsible for some of the most significant leaps in the progress of civilization, by expanding humanity's collectively thinkable territory. But at debilitating cost. These representations, having been invented for static media such as paper, tap into a small subset of human capabilities and neglect the rest. Knowledge work means sitting at a desk, interpreting and manipulating symbols. The human body is reduced to an eye staring at tiny rectangles and fingers on a pen or keyboard. Like any severely unbalanced way of living, this is crippling to mind and body. But less obviously, and more importantly, it is enormously wasteful of the vast human potential. Human beings naturally have many powerful modes of thinking and understanding. Most are incompatible with static media. In a culture that has contorted itself around the limitations of marks on paper, these modes are undeveloped, unrecognized, or scorned. We are now seeing the start of a dynamic medium. To a large extent, people today are using this medium merely to emulate and extend static representations from the era of paper, and to further constrain the ways in which the human body can interact with external representations of thought. But the dynamic medium offers the opportunity to deliberately invent a humane and empowering form of knowledge work. We can design dynamic representations that draw on the entire range of human capabilities -- all senses, all forms of movement, all forms of understanding -- instead of straining a few and atrophying the rest. This talk suggests how each of the human activities in which thought is externalized (conversing, presenting, reading, writing, etc.) can be redesigned around such representations.
Citations: 19
RichReview: blending ink, speech, and gesture to support collaborative document review
Dongwook Yoon, Nicholas Chen, François Guimbretière, A. Sellen
DOI: https://doi.org/10.1145/2642918.2647390
Abstract: This paper introduces a novel document annotation system that aims to enable the kinds of rich communication that usually occur only in face-to-face meetings. Our system, RichReview, lets users create annotations on top of digital documents using three main modalities: freeform inking, voice for narration, and deictic gestures in support of voice. RichReview uses novel visual representations and time synchronization between modalities to simplify annotation access and navigation. Moreover, RichReview's versatile support for multimodal annotations enables users to mix and interweave different modalities in threaded conversations. A formative evaluation demonstrates early promise for the system, finding support for voice, pointing, and the combination of both to be especially valuable. In addition, initial findings point to the ways in which both content and social context affect modality choice.
Citations: 51
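The time synchronization between modalities mentioned in the abstract can be sketched as timestamped events aligned to the voice track; the event model below is illustrative, not RichReview's actual data structure.

```python
# Illustrative model of a multimodal annotation thread: ink strokes and
# deictic gestures are timestamped against the voice recording so they can be
# replayed together during playback.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    modality: str   # "ink", "voice", or "gesture"
    start: float    # seconds, relative to the annotation's voice track
    end: float
    payload: str    # e.g. stroke id, transcript snippet, pointed-at region

def active_at(events: List[Event], t: float) -> List[Event]:
    """Everything that should be shown while the narration is at time t."""
    return [e for e in events if e.start <= t <= e.end]

if __name__ == "__main__":
    thread = [Event("voice", 0.0, 12.0, "this paragraph is unclear"),
              Event("gesture", 2.5, 4.0, "points at paragraph 2"),
              Event("ink", 5.0, 9.0, "underline on line 14")]
    print([e.modality for e in active_at(thread, 3.0)])  # ['voice', 'gesture']
```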
Kitty: sketching dynamic and interactive illustrations
Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, G. Fitzmaurice
DOI: https://doi.org/10.1145/2642918.2647375
Abstract: We present Kitty, a sketch-based tool for authoring dynamic and interactive illustrations. Artists can sketch animated drawings and textures to convey living phenomena, and specify the functional relationships between entities to characterize the dynamic behavior of systems and environments. An underlying graph model, customizable through sketching, captures the functional relationships between the visual, spatial, temporal, or quantitative parameters of its entities. As the viewer interacts with the resulting dynamic interactive illustration, the parameters of the drawing change accordingly, depicting the dynamics and chain of causal effects within a scene. The generality of this framework makes our tool applicable for a variety of purposes, including technical illustrations, scientific explanations, infographics, medical illustrations, children's e-books, cartoon strips, and beyond. A user study demonstrates the ease of use, variety of applications, artistic expressiveness, and creative possibilities of our tool.
Citations: 97
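The underlying graph model described in the abstract can be sketched as parameters linked by functions, with a change to one parameter propagating along the edges. This toy version (which assumes an acyclic graph) is illustrative only, not Kitty's actual model.

```python
# Toy parameter graph: nodes are entity parameters, edges carry functional
# relationships, and setting a value propagates causal effects downstream.
from typing import Callable, Dict, List, Tuple

class ParameterGraph:
    def __init__(self) -> None:
        self.values: Dict[str, float] = {}
        # Each edge: (source parameter, target parameter, mapping function).
        self.edges: List[Tuple[str, str, Callable[[float], float]]] = []

    def link(self, src: str, dst: str, fn: Callable[[float], float]) -> None:
        self.edges.append((src, dst, fn))

    def set(self, name: str, value: float) -> None:
        self.values[name] = value
        for src, dst, fn in self.edges:   # propagate (assumes no cycles)
            if src == name:
                self.set(dst, fn(value))

if __name__ == "__main__":
    g = ParameterGraph()
    # Turning the faucet handle changes the water flow, which fills the glass.
    g.link("faucet.angle", "water.flow", lambda a: a * 0.1)
    g.link("water.flow", "glass.level", lambda f: min(1.0, f * 2.0))
    g.set("faucet.angle", 45.0)
    print(g.values)
```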
Loupe: a handheld near-eye display
Kent Lyons, S. Kim, Shigeyuki Seko, David H. Nguyen, Audrey Desjardins, Mélodie Vidal, D. Dobbelstein, Jeremy Rubin
DOI: https://doi.org/10.1145/2642918.2647361
Abstract: Loupe is a novel interactive device with a near-eye virtual display, similar to head-up display glasses, that retains a handheld form factor. We present our hardware implementation and discuss our user interface, which leverages Loupe's unique combination of properties. In particular, we present our input capabilities, spatial metaphor, opportunities for using Loupe's round aspect, and our use of focal depth. We demonstrate how these capabilities come together in an example application designed to allow quick access to information feeds.
Citations: 7