Proceedings. Graphics Interface (Conference): Latest Publications

Session details: Session G2: Geometry & Style
Proceedings. Graphics Interface (Conference) Pub Date: 2018-06-01 DOI: 10.5555/3374362.3374425
Alec Jacobson
Citations: 0
Performance Characteristics of a Camera-Based Tangible Input Device for Manipulation of 3D Information
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.10
Zeyuan Chen, C. Healey, R. Amant
Abstract: This paper describes a prototype tangible six degree of freedom (6 DoF) input device that is inexpensive and intuitive to use: a cube with colored corners of specific shapes, tracked by a single camera, with pose estimated in real time. A tracking and automatic color adjustment system is designed so that the device works robustly in noisy surroundings and is invariant to changes in lighting and background noise. A system evaluation shows good performance for both refresh rate (above 60 FPS on average) and accuracy of pose estimation (average angular error of about 1°). A user study of 3D rotation tasks shows that the device outperforms other 6 DoF input devices used in a similar desktop environment. The device has the potential to facilitate interactive applications such as games as well as viewing 3D information.
Pages: 74-81
Citations: 3
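The abstract above describes recovering the cube's pose in real time from corner detections in a single camera image. A minimal sketch of that pose-recovery step, assuming calibrated intrinsics and already-detected corner pixels (the cube size, intrinsics, and detections below are placeholders, not values from the paper), could use a standard PnP solve:

```python
# Illustrative sketch only: recover a 6-DoF cube pose from detected corner
# pixels with a standard PnP solve. Corner layout, intrinsics, and the
# detections below are hypothetical, not taken from the paper.
import numpy as np
import cv2

# 3D corner positions of a cube in its own coordinate frame (meters).
object_points = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
], dtype=np.float64) * 0.05  # 5 cm cube

# 2D pixel locations of the same corners as detected in the camera image
# (placeholder values; a real system would get these from color/shape tracking).
image_points = np.array([
    [320, 240], [400, 238], [405, 320], [322, 325],
    [330, 180], [410, 178], [415, 260], [332, 265],
], dtype=np.float64)

# Pinhole intrinsics (fx, fy, cx, cy) -- assumed calibration.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the cube w.r.t. the camera
    print("rotation:\n", R)
    print("translation (m):", tvec.ravel())
```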
Collaborative 3D Modeling by the Crowd
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.16
Ryohei Suzuki, T. Igarashi
Abstract: We propose a collaborative 3D modeling system that deconstructs the complex 3D modeling process into a collection of simple tasks to be executed by nonprofessional crowd workers. Given a 2D image showing a target object, each crowd worker is directed to draw a simple sketch representing an orthographic view of the object, using their visual cognition and real-world knowledge. The system then synthesizes a 3D model by integrating the geometrical information obtained from the gathered sketches. We present a set of algorithms that generates clean line drawings and a 3D model from a collection of incomplete sketches containing a considerable amount of errors and inconsistencies. We also discuss a crowdsourcing workflow that iteratively improves the quality of submitted sketches: it introduces competition between workers via extra rewards based on peer review, along with an example-sharing mechanism that helps workers understand the task requirements and quality standards. The proposed system can produce decent-quality 3D geometries of various objects within a few hours.
Pages: 124-131
Citations: 1
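The core geometric idea above, fusing orthographic views of an object into one 3D model, can be illustrated with a toy voxel-carving sketch that intersects binary silhouettes. This shows only the basic fusion step, not the paper's handling of noisy, inconsistent crowd sketches; grid size and the silhouettes themselves are assumptions:

```python
# Minimal voxel-carving sketch: intersect orthographic silhouettes (front,
# side, top) into a rough 3D occupancy volume. Illustrative only; the paper's
# pipeline additionally cleans up noisy, inconsistent crowd-drawn sketches.
import numpy as np

N = 64  # voxel grid resolution (assumed)

# Binary silhouettes, one per orthographic view (placeholder: a disk).
rr, cc = np.mgrid[0:N, 0:N]
disk = ((rr - N / 2) ** 2 + (cc - N / 2) ** 2) < (N / 3) ** 2
front, side, top = disk, disk, disk   # front: (x, y), side: (y, z), top: (x, z)

# A voxel (x, y, z) is kept only if it projects inside every silhouette.
vol = (front[:, :, None]   # broadcast over z
       & side[None, :, :]  # broadcast over x
       & top[:, None, :])  # broadcast over y
print("occupied voxels:", int(vol.sum()))
```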
Euclidean Distance Transform Shadow Mapping
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.22
Márcio C. F. Macedo, A. Apolinario
Abstract: The high-quality simulation of the penumbra effect in real-time shadows is a challenging problem in shadow mapping. Existing shadow map filtering techniques are prone to aliasing and light-leaking artifacts that decrease the visual quality of the shadows. In this paper, we aim to minimize both problems with Euclidean distance transform shadow mapping. To reduce the perspective aliasing artifacts generated by shadow mapping, we revectorize the hard shadow boundaries using revectorization-based shadow mapping. Then, an exact normalized Euclidean distance transform is computed in a user-defined penumbra region to simulate the penumbra effect. Finally, a mean filter is applied to further suppress skeleton artifacts generated by the distance transform. The results show that our technique runs entirely on the GPU, produces fewer artifacts than related work, and provides real-time performance.
Pages: 171-180
Citations: 3
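A CPU-side sketch of the penumbra computation described above, assuming a binary hard-shadow mask and a fixed penumbra width (the actual method runs on the GPU in shadow-map space with revectorized boundaries), might look like:

```python
# Sketch: normalized Euclidean distance transform inside a fixed-width band
# around a hard shadow boundary, followed by a small mean filter to suppress
# skeleton artifacts. Mask, band width, and resolution are placeholders.
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter

H = W = 256
rows, cols = np.mgrid[0:H, 0:W]
shadow = (cols + 0.5 * rows) > 200      # placeholder hard shadow mask (True = shadowed)

penumbra_width = 12.0                   # assumed penumbra size in texels
d_in = distance_transform_edt(shadow)   # distance from shadowed texels to the lit region
d_out = distance_transform_edt(~shadow) # distance from lit texels to the shadowed region

# Signed distance to the hard boundary, remapped so 0 = fully lit, 1 = umbra.
signed = np.where(shadow, d_in, -d_out)
darkness = np.clip(0.5 + signed / (2.0 * penumbra_width), 0.0, 1.0)
shadow_factor = 1.0 - darkness          # 1 = lit, 0 = umbra, smooth in between

# Mean filter to suppress skeleton artifacts of the distance transform.
shadow_factor = uniform_filter(shadow_factor, size=3)
```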
Raising the Bars: Evaluating Treemaps vs. Wrapped Bars for Dense Visualization of Sorted Numeric Data
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.06
M. A. Yalçın, N. Elmqvist, B. Bederson
Abstract: A standard (single-column) bar chart can effectively visualize a sorted list of numeric records, but the chart height limits the number of visible records. To show more records, the bars can be made thinner (which hinders identifying records individually) or the chart can be scrolled (which requires interaction to see the overview). Treemaps have been used in practice in non-hierarchical settings for dense visualization of numeric data. As an alternative, we consider wrapped bars, a multi-column bar chart that uses length instead of area to encode numeric values. We compare treemaps and wrapped bars based on their design characteristics, and on graphical-perception performance for comparison, ranking, and overview tasks using crowdsourced experiments. Our analysis found that wrapped bars perceptually outperform treemaps in all three tasks for dense visualization of non-hierarchical, sorted numeric data.
Pages: 41-49
Citations: 8
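As a quick illustration of the wrapped-bars layout evaluated above, a hypothetical version splits a sorted list of values across several columns of length-encoded bars (the data and column count below are made up):

```python
# Sketch of a "wrapped bars" layout: a sorted list is split across several
# columns of horizontal bars, so length (not area) encodes the value.
import numpy as np
import matplotlib.pyplot as plt

values = np.sort(np.random.default_rng(0).pareto(2.0, 120))[::-1]  # sorted, descending
n_cols = 4
rows = int(np.ceil(len(values) / n_cols))

fig, axes = plt.subplots(1, n_cols, sharex=True, figsize=(10, 4))
for c, ax in enumerate(axes):
    chunk = values[c * rows:(c + 1) * rows]
    ax.barh(np.arange(len(chunk)), chunk)
    ax.invert_yaxis()      # keep the sort order reading top-to-bottom, left-to-right
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```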
Supporting Team-First Visual Analytics through Group Activity Representations
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.26
Sriram Karthik Badam, Zehua Zeng, Emily Wall, A. Endert, N. Elmqvist
Abstract: Collaborative visual analytics (CVA) involves sensemaking activities within teams of analysts based on coordination of work across team members, awareness of team activity, and communication of hypotheses, observations, and insights. We introduce a new type of CVA tool based on the notion of "team-first" visual analytics, where supporting the analytical process and needs of the entire team is the primary focus of the graphical user interface, before that of the individual analysts. To this end, we present a design space and guidelines for team-first tools in terms of conveying analyst presence, focus, and activity within the interface. We then introduce InsightsDrive, a CVA tool for multidimensional data that builds team-first features into the interface through group activity visualizations. These include (1) in-situ representations that show the focus regions of all users directly within the data visualizations using color-coded selection shadows, and (2) ex-situ representations that show the data coverage of each analyst using multidimensional visual representations. We conducted two user studies: one with individual analysts to identify the affordances of different visual representations for conveying data coverage, and the other to evaluate the performance of our team-first design with ex-situ and in-situ awareness for visual analytic tasks. Our results characterize the performance of the team-first features and reveal their advantages for team coordination.
Pages: 208-213
Citations: 8
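To make the "data coverage" notion above concrete, here is a deliberately tiny sketch that reports how much of a dataset each analyst has inspected and what remains untouched by the team; the record sets are invented, and the real tool conveys coverage through multidimensional visual representations rather than raw indices:

```python
# Toy sketch of per-analyst data coverage and the team's remaining blind spot.
# Record counts and visited sets are placeholders.
n_records = 1000
visited = {
    "analyst_A": set(range(0, 400)),
    "analyst_B": set(range(300, 650)),
    "analyst_C": set(range(600, 700)),
}

for name, recs in visited.items():
    print(f"{name}: {len(recs) / n_records:.0%} of records inspected")

team_union = set().union(*visited.values())
print(f"uncovered by the team: {(n_records - len(team_union)) / n_records:.0%}")
```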
Ivy: Exploring Spatially Situated Visual Programming for Authoring and Understanding Intelligent Environments
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.20
Barrett Ens, Fraser Anderson, Tovi Grossman, M. Annett, Pourang Irani, G. Fitzmaurice
Abstract: The availability of embedded, digital systems has led to a multitude of interconnected sensors and actuators being distributed among smart objects and built environments. Programming and understanding the behaviors of such systems can be challenging given their inherent spatial nature. To explore how spatial and contextual information can facilitate the authoring of intelligent environments, we introduce Ivy, a spatially situated visual programming tool using immersive virtual reality. Ivy allows users to link smart objects, insert logic constructs, and visualize real-time data flows between real-world sensors and actuators. Initial feedback sessions show that participants of varying skill levels can successfully author and debug programs in example scenarios.
Pages: 156-162
Citations: 48
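Ivy itself is an immersive, in-headset visual tool, but the sensor-to-logic-to-actuator dataflow it lets users author can be mimicked in a few lines; the node names and the threshold rule below are invented purely for illustration:

```python
# Hypothetical miniature of a sensor -> logic -> actuator dataflow graph,
# the kind of program Ivy lets users author spatially in VR.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    name: str
    fn: Callable[[float], float]
    outputs: List["Node"] = field(default_factory=list)

    def emit(self, value: float) -> None:
        out = self.fn(value)
        for node in self.outputs:
            node.emit(out)

# A light sensor feeds a threshold "logic construct" that drives a lamp.
lamp = Node("lamp", fn=lambda v: print(f"lamp power -> {v}") or v)
too_dark = Node("too_dark?", fn=lambda lux: 1.0 if lux < 150 else 0.0, outputs=[lamp])
light_sensor = Node("light_sensor", fn=lambda lux: lux, outputs=[too_dark])

light_sensor.emit(90.0)   # dark room:   lamp power -> 1.0
light_sensor.emit(400.0)  # bright room: lamp power -> 0.0
```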
Real-time Rendering with Compressed Animated Light Fields
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.05
Babis Koniaris, Maggie Kosek, David Sinclair, Kenny Mitchell
Abstract: We propose an end-to-end solution for presenting movie-quality animated graphics to the user while still allowing the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive representation, we display the content in real time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The cameras are placed with an optimization process so that the rendered data maximise coverage with minimum redundancy, depending on the complexity of the lighting environment. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for virtual reality applications. We detail a real-time rendering algorithm using multi-view ray casting and view-dependent decompression. Compression rates of 150:1 and greater are demonstrated, with quantitative analysis of image reconstruction quality and performance.
Pages: 33-40
Citations: 11
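The abstract above mentions placing a sparse set of cubemap cameras so that the rendered data maximise coverage with minimum redundancy. A greedy stand-in for that placement step (the scene samples, candidate positions, and coverage radius below are all assumed, and the paper's optimization is not necessarily greedy) could be:

```python
# Greedy coverage sketch: pick probe (cubemap) positions one at a time, each
# covering the most still-uncovered sample points of the viewing volume.
import numpy as np

rng = np.random.default_rng(1)
scene_samples = rng.uniform(0, 1, size=(2000, 3))  # points that need coverage
candidates = rng.uniform(0, 1, size=(200, 3))      # possible probe positions
radius = 0.25                                       # assumed coverage radius
n_probes = 8

d = np.linalg.norm(scene_samples[None, :, :] - candidates[:, None, :], axis=2)
covers = d < radius            # (candidate, sample) coverage matrix
uncovered = np.ones(len(scene_samples), dtype=bool)
chosen = []
for _ in range(n_probes):
    gain = (covers & uncovered[None, :]).sum(axis=1)  # new samples each candidate adds
    best = int(gain.argmax())
    chosen.append(candidates[best])
    uncovered &= ~covers[best]

print(f"{len(chosen)} probes placed, {int(uncovered.sum())} samples still uncovered")
```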
De-Identified Feature-based Visualization of Facial Expression for Enhanced Text Chat
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.25
Shuo-Ping Wang, Mei-Ling Chen, Hao-Chuan Wang, Chien-Tung Lai, A. Huang
Abstract: The lack of visibility in text-based chat can hinder communication, especially when nonverbal cues are instrumental to the production and understanding of messages. However, communicating rich nonverbal cues such as facial expressions may be technologically more costly (e.g., demanding bandwidth for video streaming) and socially less desirable (e.g., disclosing other personal and contextual information through video). We consider how to balance this tension by enabling people to convey facial expressions without compromising the benefits of invisibility in text communication. We present KinChat, an enhanced text chat tool that integrates motion sensing and 2D graphical visualization as a technique to convey key facial features during text conversations. We conducted two studies examining how KinChat influences the de-identification and awareness of facial cues in comparison to techniques using raw and blur-processed video, as well as its impact on real-time text chat. We show that feature-based visualization of facial expression can preserve both awareness of facial cues and non-identifiability at the same time, leading to better understanding and reduced anxiety.
Pages: 199-207
Citations: 1
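A toy sketch of the feature-based, de-identified display idea above: a few scalar facial features drive the parameters of an abstract 2D glyph instead of transmitting video. The feature names and the mapping are invented for illustration; KinChat derives its features from motion sensing:

```python
# Hypothetical mapping from normalized facial features to a cartoon-glyph
# parameterization; no identifying imagery is involved.
def expression_glyph(mouth_open: float, smile: float, brow_raise: float) -> dict:
    """All inputs normalized to [0, 1]; output drives a simple abstract face."""
    return {
        "mouth_height": 2 + 18 * mouth_open,  # pixels
        "mouth_curve": -1 + 2 * smile,        # -1 = frown .. +1 = smile
        "brow_offset": 6 * brow_raise,        # pixels above neutral
    }

print(expression_glyph(mouth_open=0.7, smile=0.9, brow_raise=0.2))
```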
Animating Multiple Escape Maneuvers for a School of Fish
Proceedings. Graphics Interface (Conference) Pub Date: 2017-06-01 DOI: 10.20380/GI2017.18
Sahithi Podila, Ying Zhu
Abstract: A school of fish exhibits a variety of distinctive maneuvers to escape from predators. For example, fish adopt avoid, compact, and inspection maneuvers when predators are nearby, use skitter or fast-avoid maneuvers when predators chase them, and exhibit fountain, split, and flash maneuvers when predators attack them. Although these escape maneuvers have long been studied in biology and ecology, they have not been sufficiently modeled in computer graphics. Previous work on fish animation provided only simple escape behavior, lacking variety, and the classic boids model does not include escape behavior. In this paper, we propose a behavioral model to simulate a variety of fish escape behaviors in reaction to a single predator. Based on biological studies, our model can simulate common escape maneuvers such as compact, inspection, avoid, fountain, and flash. We demonstrate our results with simulations of predator attacks.
Pages: 140-147
Citations: 2
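A boids-style sketch of one of the maneuvers named above (the "compact" response, where fish tighten around the school centroid while fleeing a nearby predator) is given below; all constants are illustrative, and the paper models several additional maneuvers on top of this kind of steering:

```python
# Sketch of a single escape maneuver: each fish steers toward the school
# centroid (compact) and away from the predator when it is within a threat
# radius. Constants and initial conditions are placeholders.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(-1, 1, size=(50, 2))   # fish positions
vel = rng.normal(0, 0.1, size=(50, 2))   # fish velocities
predator = np.array([0.0, 0.0])
threat_radius, dt = 0.8, 0.05

for _ in range(100):
    to_centroid = pos.mean(axis=0) - pos                   # cohesion (compact)
    from_pred = pos - predator
    dist = np.linalg.norm(from_pred, axis=1, keepdims=True) + 1e-6
    threatened = (dist < threat_radius).astype(float)
    escape = threatened * from_pred / dist                 # flee radially from predator
    vel += dt * (0.5 * to_centroid + 2.0 * escape)
    vel *= 0.98                                            # drag
    pos += dt * vel
```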