Proceedings of the working conference on Advanced visual interfaces: Latest Publications

Identification and validation of cognitive design principles for automated generation of assembly instructions
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989917
Julie Heiser, Doantam Phan, Maneesh Agrawala, B. Tversky, P. Hanrahan
Abstract: Designing effective instructions for everyday products is challenging. One reason is that designers lack a set of design principles for producing visually comprehensible and accessible instructions. We describe an approach for identifying such design principles through experiments investigating the production, preference, and comprehension of assembly instructions for furniture. We instantiate these principles into an algorithm that automatically generates assembly instructions. Finally, we perform a user study comparing our computer-generated instructions to factory-provided and highly rated hand-designed instructions. Our results indicate that the computer-generated instructions informed by our cognitive design principles significantly reduce assembly time by an average of 35% and errors by 50%. Details of the experimental methodology and the implementation of the automated system are described.
Citations: 108
DeepDocument: use of a multi-layered display to provide context awareness in text editing
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989902
M. Masoodian, Sam McKoy, Bill Rogers, David Ware
Abstract: Word-processing software usually displays only the paragraphs of text immediately adjacent to the cursor position. Generally this is appropriate, for example when composing a single paragraph. However, when reviewing or working on the layout of a document, it is necessary to establish awareness of the current text in the context of the document as a whole. This can be done by scrolling or zooming, but in doing so focus is easily lost and hard to regain. We have developed a system called DeepDocument, which uses a two-layered LCD display in which both focussed and document-wide views are presented simultaneously. The overview is shown on the rear display and the focussed view on the front, maintaining full screen size for each. The physical separation of the layers takes advantage of human depth perception to allow users to perceive the views independently without having to redirect their gaze. DeepDocument has been written as an extension to Microsoft Word™.
Citations: 13
ValueCharts: analyzing linear models expressing preferences and evaluations
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989885
G. Carenini, J. Loyd
Abstract: In this paper we propose ValueCharts, a set of visualizations and interactive techniques intended to support decision-makers in inspecting linear models of preferences and evaluation. Linear models are popular decision-making tools for individuals, groups and organizations. In Decision Analysis, they help the decision-maker analyze preferential choices under conflicting objectives. In Economics and the Social Sciences, similar models are devised to rank entities according to an evaluative index of interest. The fundamental goal of building models expressing preferences and evaluations is to help the decision-maker organize all the information relevant to a decision into a structure that can be effectively analyzed. However, as models and their domains of application grow in complexity, model analysis can become a very challenging task. We claim that ValueCharts will make the inspection and application of these models more natural and effective. We support our claim by showing how ValueCharts effectively enable a set of basic tasks that we argue are at the core of analyzing and understanding linear models of preferences and evaluation.
Citations: 70
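The "linear models" in question are weighted additive value functions, the standard form in multi-attribute decision analysis: each alternative receives a score per attribute, and its overall value is the weighted sum of those scores. A minimal sketch in Python follows; the attributes, weights, and scores are hypothetical, chosen only to illustrate the model form, and this is not the paper's implementation:

```python
# Minimal sketch of a weighted additive linear model of the kind
# ValueCharts visualizes (illustrative only; not the paper's code).
# Each alternative is scored per attribute on [0, 1]; the overall
# value is the weighted sum of those scores.

weights = {"price": 0.5, "location": 0.3, "size": 0.2}  # hypothetical attributes

alternatives = {
    "apartment_a": {"price": 0.8, "location": 0.4, "size": 0.6},
    "apartment_b": {"price": 0.5, "location": 0.9, "size": 0.7},
}

def overall_value(scores: dict) -> float:
    """Weighted additive value: sum over attributes of weight * score."""
    return sum(weights[attr] * scores[attr] for attr in weights)

# Rank alternatives by overall value, as a ValueChart's total column would.
ranking = sorted(alternatives, key=lambda a: overall_value(alternatives[a]), reverse=True)
for name in ranking:
    print(name, round(overall_value(alternatives[name]), 3))
```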
Focus dependent multi-level graph clustering
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989888
François Boutin, Mountaz Hascoët
Abstract: In this paper we propose a structure-based clustering technique that transforms a given graph into a specific double-tree structure called a multi-level outline tree. Each meta-node of the tree, which represents a subset of nodes, is itself hierarchically clustered; a meta-node is thus the root of a tree of included clusters. The main originality of our approach is that it accounts for the user's focus in the clustering process, providing views from different perspectives. Multi-level outline trees are computed in linear time and are easy to explore. We think that our technique is well suited to investigating various graphs, such as Web graphs or citation graphs.
Citations: 15
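The abstract does not detail how the multi-level outline tree is built, so the following is only a generic sketch of the focus-dependent idea it describes: nodes near the user's focus stay expanded, while distant nodes are collapsed away. The radius cutoff and adjacency-dict representation are assumptions made for illustration, not the paper's algorithm:

```python
from collections import deque

# Illustrative sketch of focus-dependent clustering (not the paper's
# algorithm): nodes within `radius` hops of the focus stay expanded;
# everything farther away would be collapsed into meta-nodes.

def focus_cluster(graph: dict, focus, radius: int = 2):
    """graph: adjacency dict {node: list of neighbours}."""
    # Breadth-first search to find hop distance from the focus node.
    dist = {focus: 0}
    queue = deque([focus])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    expanded = {n for n, d in dist.items() if d <= radius}
    collapsed = set(graph) - expanded  # candidates for meta-nodes
    return expanded, collapsed

g = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a"],
    "d": ["b", "e"], "e": ["d", "f"], "f": ["e"],
}
expanded, collapsed = focus_cluster(g, "a", radius=1)
print(sorted(expanded), sorted(collapsed))  # ['a', 'b', 'c'] ['d', 'e', 'f']
```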
Aligning information browsing and exploration methods with a spatial navigation aid for mobile city visitors
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989900
T. Rist, Stephan Baldes, Patrick Brandmeier
Abstract: Navigation support for both physical space and information spaces addresses fundamental information needs of mobile users in many application scenarios, including the classic shopping visit to the town centre. It is therefore a particular research objective in the mobile domain to explore, showcase, and test the interplay of physical navigation with navigation in an information space that, metaphorically speaking, is superimposed on the physical space. We have developed a demonstrator that couples a spatial navigation aid, in the form of a 2D interactive map viewer, with other information services, such as an interactive web directory service that provides information about shops and restaurants and their product ranges. The research has raised a number of interesting questions, such as how to align interactions performed in the navigation aid with meaningful actions in a coupled twin application and, vice versa, how to reflect navigation in an information space in the aligned spatial navigation aid.
Citations: 1
Image presentation in space and time: errors, preferences and eye-gaze activity
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989884
R. Spence, M. Witkowski, Catherine Fawcett, B. Craft, O. Bruijn
Abstract: Rapid Serial Visual Presentation (RSVP) is a technique that presents images sequentially in the time domain, thereby offering an alternative to the conventional concurrent display of images in the space domain. Such an alternative offers potential advantages where display area is at a premium. However, notwithstanding the flexibility to employ either or both domains for presentation purposes, little is known about which alternative suits which specific task undertaken by a user. As a consequence, there is a pressing need to provide guidance for the interaction designer faced with these alternatives. We investigated the task of identifying the presence or absence of a previously viewed image within a collection of images, a requirement of many real activities. In experiments with subjects, the collection of images was presented in three modes: (1) a 'slide show' RSVP mode; (2) a concurrent, static 'static mode'; and (3) a 'mixed' mode. Each mode employed the same display area and the same total presentation time, together regarded as primary resources available to the interaction designer. For each presentation mode, the outcome identified error profiles and subject preferences. Eye-gaze studies detected distinctive differences between the three presentation modes.
Citations: 19
Integrating expanding annotations with a 3D explosion probe
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989871
Henry Sonnet, Sheelagh Carpendale, T. Strothotte
Abstract: Understanding complex 3D virtual models can be difficult, especially when the model has interior components that are not initially visible, along with ancillary text. We describe new techniques for the interactive exploration of 3D models. Specifically, in addition to traditional viewing operations, we present new text-extrusion techniques combined with techniques that create an interactive explosion diagram. In our approach, scrollable text annotations associated with the various parts of the model can be revealed dynamically, either in part or in full, by moving the mouse cursor within annotation trigger areas. Strong visual connections between model parts and the associated text are included in order to aid comprehension. Furthermore, the model parts can be separated, creating interactive explosion diagrams. Using a 3D probe, occluding objects can be interactively moved apart and then returned to their initial locations. Displayed annotations are kept readable despite model manipulations. Hence, our techniques provide textual context within the spatial context of the 3D model.
Citations: 64
Task-sensitive user interfaces: grounding information provision within the context of the user's activity
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989899
N. Colineau, Andrew Lampert, Cécile Paris
Abstract: In the context of innovative Airborne Early Warning and Control (AEW&C) platform capabilities, we are building an environment that can support the generation of information tailored to operators' tasks. The challenging issues here are to improve the methods for managing information delivery to the operators, and thus to provide them with high-value information on their displays whilst avoiding noise and clutter. To this end, we enhance the operator's graphical interface with information delivery mechanisms that support the maintenance of situation awareness and improve efficiency. We do this by proactively delivering task-relevant information.
Citations: 7
Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989883
Patrick Baudisch, Bongshin Lee, Libby Hanna
Abstract: Fishnet is a web browser that always displays web pages in their entirety, independent of their size. Fishnet accomplishes this by using a fisheye view, i.e. by showing a focus region at readable scale while spatially compressing the page content above and below that region. Fishnet offers search-term highlighting, and assures that highlighted terms remain readable by using "popouts". This allows users to visually scan search results across the entire page without scrolling. The scope of this paper is twofold. First, we present Fishnet as a novel way of viewing highlighted search results, and we discuss the design space. Second, we present a user study that helps practitioners determine which visualization technique (fisheye view, overview, or regular linear view) to pick for which type of visual search scenario.
Citations: 87
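The geometry underlying such a fisheye page view can be sketched as a per-line scale factor: full size inside the focus region, decaying with distance outside it so the whole page fits on screen. The constants and fall-off function below are illustrative assumptions, not values taken from Fishnet:

```python
# Illustrative per-line scale function for a fisheye page view
# (a sketch of the general technique, not Fishnet's actual code).

def line_scale(y: float, focus_y: float, focus_half_height: float = 60.0,
               falloff: float = 0.01, min_scale: float = 0.1) -> float:
    """Return the vertical scale factor for a line at page offset y.

    Lines inside the focus band render at full size (1.0); outside it,
    scale decays with distance so the whole page stays visible.
    """
    d = abs(y - focus_y) - focus_half_height
    if d <= 0:
        return 1.0  # inside the readable focus region
    return max(min_scale, 1.0 / (1.0 + falloff * d))

# Example: a line 500 px below the focus centre is heavily compressed.
print(line_scale(1000.0, 500.0))  # ~0.185
```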
A graph-based interface to complex hypermedia structure visualization
Proceedings of the working conference on Advanced visual interfaces. Pub Date: 2004-05-25. DOI: 10.1145/989863.989887
Manuel Freire, P. Rodríguez
Abstract: Complex hypermedia structures can be difficult to author and maintain, especially when the usual hierarchical representation cannot capture important relations. We propose a graph-based direct-manipulation interface that uses multiple focus+context techniques to avoid display clutter and information overload. A semantic fisheye lens based on hierarchical clustering allows the user to work on high-level abstractions of the structure. Navigation through the resulting graph is animated in order to avoid loss of orientation, with a force-directed algorithm in charge of generating successive layouts. Multiple views can be generated over the same data, each with independent settings for filtering, clustering and degree of zoom. While these techniques are all well known in the literature, it is their combination and application to the field of hypermedia authoring that constitutes a powerful tool for the development of next-generation hyperspaces. A generic framework, CLOVER, and two specific applications for existing hypermedia systems have been implemented.
Citations: 19
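The force-directed layout mentioned in the abstract is a standard technique; a generic Fruchterman-Reingold-style sketch (not CLOVER's implementation, and with arbitrary constants) looks like this:

```python
import math
import random

# Generic force-directed layout sketch (Fruchterman-Reingold style),
# not CLOVER's implementation: repulsion between all node pairs,
# spring attraction along edges, positions nudged each iteration.

def layout(nodes, edges, iterations=200, k=1.0, step=0.05):
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iterations):
        force = {n: [0.0, 0.0] for n in nodes}
        # Repulsive force between every pair of nodes: k^2 / d.
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-6
                f = k * k / d
                force[a][0] += f * dx / d; force[a][1] += f * dy / d
                force[b][0] -= f * dx / d; force[b][1] -= f * dy / d
        # Attractive spring force along edges: d^2 / k.
        for a, b in edges:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-6
            f = d * d / k
            force[a][0] -= f * dx / d; force[a][1] -= f * dy / d
            force[b][0] += f * dx / d; force[b][1] += f * dy / d
        # Apply forces, capping per-iteration movement for stability.
        for n in nodes:
            fx, fy = force[n]
            mag = math.hypot(fx, fy) or 1e-6
            move = min(step * mag, 0.1)
            pos[n][0] += fx / mag * move
            pos[n][1] += fy / mag * move
    return pos

print(layout(["a", "b", "c"], [("a", "b"), ("b", "c")], iterations=50))
```

Animating between successive layouts, as the paper describes, then amounts to interpolating node positions between the output of consecutive calls.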