Proceedings. Graphics Interface (Conference): Latest Publications

Testing the Limits of the Spatial Approach: Comparing Retrieval and Revisitation Performance of Spatial and Paged Data Organizations for Large Item Sets
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.22 | Pages: 215-224
C. Gutwin, M. Kamp, J. Storring, A. Cockburn, Cody J. Phillips
Abstract: Finding and revisiting objects in visual content collections is common in many analytics tasks. For large collections, filters are often used to reduce the number of items shown, but many systems generate a new ordering of the items for every filter update, and these changes make it difficult for users to remember the locations of important items. An alternative is to show the entire dataset in a spatially-stable layout, and show filter results with highlighting. The spatial approach has been shown to work well with small datasets, but little is known about how spatial memory scales to tasks with hundreds of items. To investigate the scalability of spatial presentations, we carried out a study comparing finding and re-finding performance with two data organizations: pages of items that re-generate item ordering with each filter change, and a spatially-stable organization that presents all 700 items at once. We found that although overall times were similar, the spatial interface was faster for revisitation, and participants used fewer filters than in the paged interface as they gained familiarity with the data. Our results add to previous work by showing that spatial interfaces can work well with datasets of hundreds of items, and that they better support a transition to fast revisitation using spatial memory.
Citations: 0
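The contrast the study tests is mechanical enough to sketch in code. Below is a minimal, hypothetical illustration (ours, not the authors' implementation) of the two organizations: a paged view that re-packs matching items into fresh positions on every filter change, versus a spatially-stable grid that keeps all 700 items in fixed cells and only toggles a highlight flag.

```python
# Hypothetical sketch of the two data organizations compared in the paper.
# In the paged view an item's position depends on the current filter; in the
# spatial view every item keeps its grid cell and the filter only highlights.

def paged_layout(items, predicate, page_size=50):
    """Re-generate item ordering on every filter change: matching items
    are packed into pages, so positions shift as the filter changes."""
    matches = [it for it in items if predicate(it)]
    return [matches[i:i + page_size] for i in range(0, len(matches), page_size)]

def spatial_layout(items, predicate, columns=35):
    """Spatially-stable organization: each item keeps a fixed grid cell
    across filter changes; the filter only controls highlighting."""
    return [
        {"item": it, "row": i // columns, "col": i % columns,
         "highlighted": predicate(it)}
        for i, it in enumerate(items)
    ]

items = [f"item-{i:03d}" for i in range(700)]   # the study used 700 items
def wants(it):                                   # an arbitrary example filter
    return it.endswith("7")

pages = paged_layout(items, wants)
grid = spatial_layout(items, wants)
print(len(pages), sum(cell["highlighted"] for cell in grid))
```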
A conversation with CHCCS 2020 achievement award winner Dinesh K. Pai
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.02 | Pages: 3-6
D. Pai
Abstract: The 2020 CHCCS Achievement Award from the Canadian Human-Computer Communications Society is presented to Prof. Dinesh Pai (UBC) for his numerous high-impact contributions to the field of computer graphics research. His diverse research addresses physics-based animation, multisensory displays including haptics and sound, and realistic digital human models. CHCCS invites a publication by the award winner to be included in the proceedings, and this year we continue the tradition of an interview format rather than a formal paper. This permits a casual discussion of the research areas, insights, and contributions of the award winner. What follows is an edited transcript of a conversation between Dinesh Pai and Doug James (Stanford CS professor and former PhD student) that took place on 14 March 2020, via Zoom.
Citations: 0
Constraint-Based Spectral Space Template Deformation for Ear Scans
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.37 | Pages: 374-381
Srinivasan Ramachandran, T. Popa, Eric Paquette
Abstract: Ears are complicated shapes and contain a lot of folds. It is difficult to correctly deform an ear template to achieve the same shape as a scan, while avoiding the reconstruction of noise from the scan and being robust to bad geometry found in the scan. We leverage the smoothness of the spectral space to help in the alignment of the semantic features of the ears. Edges detected in image space are used to identify relevant features from the ear that we align in the spectral representation by iteratively deforming the template ear. We then apply a novel reconstruction that preserves the deformation from the spectral space while reintroducing the original details. A final deformation based on constraints considering surface position and orientation deforms the template ear to match the shape of the scan. We tested our approach on many ear scans and observed that the resulting template shape provides a good compromise between complying with the shape of the scan and avoiding the reconstruction of the noise found in the scan. Furthermore, our approach was robust enough to handle scan meshes exhibiting typical bad geometry such as cracks and handles.
Citations: 0
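The "smoothness of the spectral space" the abstract leans on is a standard idea: projecting mesh geometry onto the low-frequency eigenvectors of a graph Laplacian yields a smoothed shape in which fine detail (and scan noise) is suppressed. The sketch below is a generic illustration of that projection, not the authors' pipeline.

```python
# Generic spectral-smoothing sketch (not the paper's pipeline): reconstruct
# vertex positions from the k lowest-frequency modes of the graph Laplacian.
import numpy as np

def spectral_smooth(vertices, edges, k=20):
    """vertices: (n, 3) array; edges: iterable of (i, j) vertex-index pairs.
    Returns positions projected onto the k smoothest Laplacian eigenvectors."""
    n = len(vertices)
    L = np.zeros((n, n))
    for i, j in edges:                      # unnormalized Laplacian L = D - A
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    eigvals, eigvecs = np.linalg.eigh(L)    # L is symmetric, so eigh applies
    basis = eigvecs[:, :k]                  # low-frequency (smooth) modes
    return basis @ (basis.T @ vertices)     # project, then reconstruct
```

Aligning features in this smooth space sidesteps noise and cracks in the raw scan; per the abstract, the original details are reintroduced afterwards.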
Yarn: Adding Meaning to Shared Personal Data through Structured Storytelling
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.18 | Pages: 168-182
Daniel A. Epstein, Mira Dontcheva, J. Fogarty, Sean A Munson
Abstract: People often do not receive the reactions they desire when they use social networking sites to share data collected through personal tracking tools like Fitbit, Strava, and Swarm. Although some people have found success sharing with close connections or in finding online communities, most audiences express limited interest and rarely respond. We report on findings from a human-centered design process undertaken to examine how tracking tools can better support people in telling their story using their data. Twenty-three formative interviews contribute design goals for telling stories of accomplishment, including the need to incorporate relevant data. We implement these goals in Yarn, a mobile app that offers structure for telling stories of accomplishment around training for running races and completing do-it-yourself projects. Twenty-one participants used Yarn for four weeks across two studies. Although Yarn's structure led some participants to include more data or explanation in the moments they created, many felt that the structure prevented them from telling their stories in the way they desired. In light of participant use, we discuss additional challenges to using personal data to inform and target an interested audience.
Citations: 7
Image Abstraction through Overlapping Region Growth
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.08 | Pages: 66-73
Rosa Azami, D. Mould
Abstract: We propose a region-based abstraction of a photograph, where the image plane is covered by overlapping irregularly shaped regions that approximate the image content. We segment regions using a novel region growth algorithm intended to produce highly irregular regions that still respect image edges, different from conventional segmentation methods that encourage compact regions. The final result has reduced detail, befitting abstraction, but still contains some small structures such as highlights; thin features and crooked boundaries are retained, while interior details are softened, yielding a painting-like abstraction effect.
Citations: 0
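As a rough illustration of the edge-respecting growth the abstract describes (a hypothetical simplification, not the authors' algorithm, which additionally produces overlapping irregular regions), a region can be grown breadth-first from a seed pixel while refusing pixels that differ sharply from the running region mean, so strong edges halt the growth:

```python
# Hypothetical edge-respecting region growth on a grayscale image
# (a 2-D numpy array). Strong intensity edges stop the region.
from collections import deque
import numpy as np

def grow_region(img, seed, tol=12.0):
    """seed: (row, col). Returns the list of pixel coordinates in the region."""
    h, w = img.shape
    visited = np.zeros((h, w), dtype=bool)
    region = [seed]
    visited[seed] = True
    total = float(img[seed])                 # running sum for the region mean
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        mean = total / len(region)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if abs(float(img[ny, nx]) - mean) <= tol:  # refuse edge pixels
                    visited[ny, nx] = True
                    region.append((ny, nx))
                    total += float(img[ny, nx])
                    queue.append((ny, nx))
    return region
```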
Selection Performance Using a Scaled Virtual Stylus Cursor in VR
Proceedings. Graphics Interface (Conference) | Pub Date: 2020-01-01 | DOI: 10.20380/GI2020.16 | Pages: 148-157
Seyed Amir Ahmad Didehkhorshid, Robert J. Teather
Abstract: We propose a surface warping technique we call warped virtual surfaces (WVS). WVS is similar to applying CD gain to a mouse cursor on a screen and is used with traditionally 1:1 input devices, in our case a tablet and stylus, for use with VR head-mounted displays (HMDs). WVS allows users to interact with arbitrarily large virtual panels in VR while getting the benefits of passive haptic feedback from a fixed-size physical panel. To determine the extent to which WVS affects user performance, we conducted an experiment with 24 participants using a Fitts' law reciprocal tapping task to compare different scale factors. Results indicate there was a significant difference in movement time for large scale factors. However, for throughput (ranging from 3.35-3.47 bps) and error rate (ranging from 3.6-5.4%), our analysis did not find a significant difference between scale factors. Using non-inferiority statistical testing (a form of equivalence testing), we show that performance in terms of throughput and error rate for large scale factors is no worse than a 1-to-1 mapping. Our results suggest WVS is a promising way of providing large tactile surfaces in VR using small physical surfaces, and with little impact on user performance.
Citations: 2
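Two pieces of this study reduce to simple arithmetic. WVS is essentially a constant gain on the stylus position, so a fixed physical tablet can address a larger virtual panel; and Fitts' law throughput is the effective index of difficulty divided by movement time (the standard ISO 9241-9 formulation). A hypothetical sketch of both, with made-up example numbers:

```python
# Hypothetical sketch: a WVS-style gain mapping plus the standard
# effective-throughput computation used in Fitts' law studies (ISO 9241-9).
import math
import statistics

def warp_to_virtual(px, py, scale):
    """Map a 1:1 physical tablet coordinate onto the larger virtual panel.
    scale = 1.0 is a one-to-one mapping; larger values cover a bigger panel."""
    return px * scale, py * scale

def throughput(distance, endpoint_xs, movement_time):
    """Effective throughput in bits/s: We = 4.133 * SD of selection endpoints,
    IDe = log2(D / We + 1), TP = IDe / MT."""
    we = 4.133 * statistics.stdev(endpoint_xs)
    ide = math.log2(distance / we + 1.0)
    return ide / movement_time

# Made-up numbers: 256 mm movements, ~7 mm endpoint spread, 1.1 s mean
# movement time -> roughly 3 bps, the ballpark the study reports.
print(throughput(256.0, [0.0, 7.5, -6.0, 9.0, -8.5, 4.0], 1.1))
```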
UniNet: A Mixed Reality Driving Simulator
Proceedings. Graphics Interface (Conference) | Pub Date: 2019-12-21 | DOI: 10.20380/GI2020.06 | Pages: 37-55
David F. Arppe, Loutfouz Zaman, R. Pazzi, Khalil El-Khatib
Abstract: Driving simulators play an important role in vehicle research. However, existing virtual reality simulators do not give users a true sense of presence. UniNet is our driving simulator, designed to allow users to interact with and visualize simulated traffic in mixed reality. It is powered by SUMO and Unity. UniNet's modular architecture allows us to investigate interdisciplinary research topics such as vehicular ad-hoc networks, human-computer interaction, and traffic management. We accomplish this by giving users the ability to observe and interact with simulated traffic in a high-fidelity driving simulator. We present a user study that subjectively measures users' sense of presence in UniNet. Our findings suggest that our novel mixed reality system does increase this sensation.
Citations: 1
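UniNet couples SUMO's traffic simulation with rendering in Unity. The usual way to drive such a coupling from SUMO's side is its TraCI Python API: step the simulation, read back vehicle states, and hand them to the renderer. A minimal sketch using the real `traci` package — the config path and the `send_to_renderer` hook are our placeholders, not UniNet code:

```python
# Minimal TraCI loop (SUMO's Python API). The renderer hook is a placeholder
# standing in for whatever bridge a Unity-based simulator like UniNet uses.
import traci

def send_to_renderer(vehicle_states):
    print(vehicle_states)  # placeholder: forward states to the 3-D engine

traci.start(["sumo", "-c", "scenario.sumocfg"])  # illustrative config path
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()                    # advance traffic one step
        states = {
            vid: {
                "pos": traci.vehicle.getPosition(vid),  # (x, y) network coords
                "angle": traci.vehicle.getAngle(vid),   # heading in degrees
                "speed": traci.vehicle.getSpeed(vid),   # m/s
            }
            for vid in traci.vehicle.getIDList()
        }
        send_to_renderer(states)
finally:
    traci.close()
```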
We're Here to Help: Company Image Repair and User Perception of Data Breaches
Proceedings. Graphics Interface (Conference) | Pub Date: 2019-12-21 | DOI: 10.20380/GI2020.24 | Pages: 236-245
Zahra Hassanzadeh, Sky Marsen, R. Biddle
Abstract: Data breaches involve information being accessed by unauthorized parties. Our research concerns user perception of data breaches, especially issues relating to accountability. A preliminary study indicated many people had a weak understanding of the issues, and felt they themselves were somehow responsible. We speculated that this impression might stem from organizational communication strategies. We therefore compared texts from organizations with those from external sources, such as the news media. This suggested that organizations use well-known crisis communication methods to reduce their reputational damage, and that these strategies align with repositioning of the narrative elements involved in the story. We then conducted a quantitative study, asking participants to rate either organizational texts or news texts about breaches. The findings of this study were in line with our document analysis, and suggest that organizational communication affects users' perception of victimization, attitudes in data protection, and accountability. Our study suggests software design and legal implications to support users in protecting themselves and developing better mental models of security breaches.
Citations: 2
Support System for Etching Latte Art by Tracing Procedure Based on Projection Mapping
Proceedings. Graphics Interface (Conference) | Pub Date: 2019-12-21 | DOI: 10.20380/GI2020.28 | Pages: 279-285
Momoka Kawai, S. Kodama, Tokiichiro Takahashi
Abstract: It is difficult for beginners to create well-balanced etched latte art patterns using two fluids with different viscosities, such as foamed milk and syrup. However, it is not easy to create well-balanced etched latte art even while watching process videos that show procedures. In this paper, we propose a system that supports beginners in creating well-balanced etched latte art by projecting the etching procedure directly onto a cappuccino. In addition, we examine the similarity between etched latte art and design templates using background subtraction. The experimental results show the progress in creating well-balanced etched latte art using our system.
Citations: 0
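The similarity check the abstract mentions can be illustrated with plain background subtraction: difference the photographed result against the design template and score the fraction of pixels that agree. A hypothetical sketch (the threshold and the assumption of pre-aligned grayscale images are ours):

```python
# Hypothetical background-subtraction similarity score between a photo of the
# finished latte and its design template, both grayscale numpy arrays of the
# same shape, aligned beforehand.
import numpy as np

def pattern_similarity(photo, template, threshold=40):
    """Fraction of pixels whose absolute difference stays under the
    threshold: 1.0 means a perfect match to the template."""
    # Cast to a signed type first so uint8 subtraction cannot wrap around.
    diff = np.abs(photo.astype(np.int16) - template.astype(np.int16))
    return float(np.mean(diff < threshold))
```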
QCue: Queries and Cues for Computer-Facilitated Mind-Mapping
Proceedings. Graphics Interface (Conference) | Pub Date: 2019-12-21 | DOI: 10.20380/GI2020.14 | Pages: 125-136
Ting-Ju Chen, S. Subramanian, Vinayak R. Krishnamurthy
Abstract: We introduce a novel workflow, QCue, for providing textual stimulation during mind-mapping. Mind-mapping is a powerful tool whose intent is to allow one to externalize ideas and their relationships surrounding a central problem. The key challenge in mind-mapping is the difficulty in balancing the exploration of different aspects of the problem (breadth) with a detailed exploration of each of those aspects (depth). Our idea behind QCue is based on two mechanisms: (1) computer-generated automatic cues to stimulate the user to explore the breadth of topics based on the temporal and topological evolution of a mind-map and (2) user-elicited queries for helping the user explore the depth for a given topic. We present a two-phase study wherein the first phase provided insights that led to the development of our workflow for stimulating the user through cues and queries. In the second phase, we present a between-subjects evaluation comparing QCue with a digital mind-mapping workflow without computer intervention. Finally, we present an expert rater evaluation of the mind-maps created by users in conjunction with user feedback.
Citations: 5
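The cue mechanism reacts to how the mind-map evolves in time and in structure. A toy heuristic in that spirit (entirely hypothetical, not the QCue algorithm) would rank nodes by how long they have sat untouched and how few children they have, then nudge the user toward the most neglected ones:

```python
# Toy cue heuristic (not the QCue algorithm): favor nodes that are old and
# underexplored, nudging the user back toward the breadth of the map.
import time

def pick_cue_nodes(nodes, now=None, max_cues=3):
    """nodes: dicts with 'label', 'last_touched' (epoch seconds), 'children'
    (list of child nodes). Returns labels of the most neglected nodes."""
    now = now or time.time()
    def neglect(node):
        age = now - node["last_touched"]   # temporal evolution of the map
        fanout = len(node["children"])     # topological evolution of the map
        return age / (1 + fanout)          # old + few children = neglected
    ranked = sorted(nodes, key=neglect, reverse=True)
    return [n["label"] for n in ranked[:max_cues]]
```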