Proceedings. Graphics Interface (Conference): Latest Publications

Learning Multiple Mappings: an Evaluation of Interference, Transfer, and Retention with Chorded Shortcut Buttons
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.21
C. Gutwin, Carl-Eike Hofmeister, David Ledo, Alix Goguey
Abstract: Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, little is known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of chorded buttons for augmenting input in mobile interactions.
Pages: 206-214
Citations: 1
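The design above maps the 2^3 - 1 = 7 non-empty states of three buttons to commands, with a separate mapping table per application (the study used three). A minimal sketch of that dispatch logic, not the authors' implementation; all application and command names are illustrative:

```python
# Chord masks for three physical buttons (pressed together = bitwise OR).
BTN_A, BTN_B, BTN_C = 0b001, 0b010, 0b100

# One mapping per application; each of the seven non-empty chords gets an action.
MAPPINGS = {
    "browser": {
        BTN_A: "back", BTN_B: "forward", BTN_C: "new-tab",
        BTN_A | BTN_B: "reload", BTN_A | BTN_C: "close-tab",
        BTN_B | BTN_C: "bookmark", BTN_A | BTN_B | BTN_C: "history",
    },
    "editor": {
        BTN_A: "copy", BTN_B: "paste", BTN_C: "cut",
        BTN_A | BTN_B: "undo", BTN_A | BTN_C: "redo",
        BTN_B | BTN_C: "select-all", BTN_A | BTN_B | BTN_C: "save",
    },
}

def dispatch(app: str, chord: int) -> str:
    """Resolve a chord (3-bit mask of currently pressed buttons) to an action."""
    return MAPPINGS[app].get(chord, "no-op")

if __name__ == "__main__":
    print(dispatch("browser", BTN_A | BTN_B))  # -> reload
    print(dispatch("editor", BTN_A | BTN_B))   # -> undo
```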
Gedit: Keyboard Gestures for Mobile Text Editing
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.47
M. Zhang, J. Wobbrock
Abstract: Text editing on mobile devices can be a tedious process. To perform various editing operations, a user must repeatedly move his or her fingers between the text input area and the keyboard, making multiple round trips and breaking the flow of typing. In this work, we present Gedit, a system of on-keyboard gestures for convenient mobile text editing. Our design includes a ring gesture and flicks for cursor control, bezel gestures for mode switching, and four gesture shortcuts for copy, paste, cut, and undo. Variations of our gestures exist for one and two hands. We conducted an experiment to compare Gedit with the de facto touch+widget-based editing interactions. Our results showed that Gedit's gestures were easy to learn, 24% and 17% faster than the de facto interactions for one- and two-handed use, respectively, and preferred by participants.
Pages: 470-473
Citations: 8
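Gedit's cursor flicks are short directional strokes made on top of the keyboard. A minimal sketch of how such a flick could be classified by its dominant direction, assuming pixel coordinates with y growing downward; the distance threshold and command names are illustrative assumptions, not from the paper:

```python
import math

FLICK_COMMANDS = {
    "left": "cursor-left",
    "right": "cursor-right",
    "up": "cursor-up",
    "down": "cursor-down",
}

def classify_flick(start, end, min_dist=30.0):
    """Return a cursor command for a touch stroke, or None if it is a tap.

    start/end are (x, y) points in pixels; y grows downward as on screens.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < min_dist:  # too short to count as a flick
        return None
    if abs(dx) >= abs(dy):             # horizontal flick dominates
        return FLICK_COMMANDS["right" if dx > 0 else "left"]
    return FLICK_COMMANDS["down" if dy > 0 else "up"]

if __name__ == "__main__":
    print(classify_flick((100, 200), (180, 205)))  # -> cursor-right
    print(classify_flick((100, 200), (102, 201)))  # -> None (a tap)
```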
Exploring Video Conferencing for Doctor Appointments in the Home: A Scenario-Based Approach from Patients' Perspectives
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.04
Dongqi Han, Yasamin Heshmat, Carman Neustaedter
Abstract: We are beginning to see changes to health care systems where patients are now able to visit their doctor using video conferencing appointments. Yet we know little of how such systems should be designed to meet patients' needs. We used a scenario-based design method with video prototyping and conducted patient-centered contextual interviews with people to learn about their reactions to futuristic video-based appointments. Results show that video-based appointments differ from face-to-face consultations in terms of accessibility, relationship building, camera work, and privacy issues. These results illustrate design challenges for video calling systems that can support video-based appointments between doctors and patients, with an emphasis on providing adequate camera control, support for showing empathy, and mitigating privacy concerns.
Pages: 17-27
Citations: 2
Interactive Shape Based Brushing Technique for Trail Sets
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.25
Almoctar Hassoumi, M. Lobo, Gabriel Jarry, Vsevolod Peysakhovich, C. Hurter
Abstract: Brushing techniques have a long history, with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability, and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and overlap. Existing techniques rely on trial and error combined with many view modifications such as panning, zooming, and selection refinements. For moving-object analysis, recorded positions are connected into line segments forming trajectories, creating further occlusion and overplotting. As a solution for selection in cluttered views, this paper investigates a novel brushing technique that relies not only on the brushing location but also on the shape of the brushed area. The process is as follows. First, the user brushes the region where trajectories of interest are visible (standard brushing). Second, the shape of the brushed area is used to select similar items. Third, the user adjusts the degree of similarity to filter out the requested trajectories. This brushing technique supports two comparison metrics: piecewise Pearson correlation and a similarity measure based on information geometry. To show the efficiency of this brushing method, we apply it to concrete scenarios with datasets from air traffic control, eye tracking, and GPS trajectories.
Pages: 246-255
Citations: 2
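The core of the technique is scoring how well each trajectory's shape matches the brushed area. A minimal sketch of a simplified version of the paper's first metric, Pearson correlation over the coordinate sequences, here computed after arc-length resampling; the resampling length and selection threshold are illustrative assumptions:

```python
import numpy as np

def resample(poly: np.ndarray, n: int = 64) -> np.ndarray:
    """Resample a polyline (k, 2) to n points spaced evenly by arc length."""
    seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(u, t, poly[:, i]) for i in range(2)])

def shape_similarity(brush: np.ndarray, trail: np.ndarray) -> float:
    """Mean Pearson correlation of the x and y sequences, in [-1, 1]."""
    a, b = resample(brush), resample(trail)
    rx = np.corrcoef(a[:, 0], b[:, 0])[0, 1]
    ry = np.corrcoef(a[:, 1], b[:, 1])[0, 1]
    return float((rx + ry) / 2.0)

def select_trails(brush, trails, threshold=0.8):
    """Keep trails whose shape correlates with the brushed area's shape."""
    return [t for t in trails if shape_similarity(brush, t) >= threshold]

if __name__ == "__main__":
    theta = np.linspace(0, np.pi, 50)
    arc = np.column_stack([np.cos(theta), np.sin(theta)])  # brushed shape
    similar = arc * 2.0 + 5.0                              # scaled/shifted copy
    diagonal = np.column_stack([theta, theta])             # a straight trail
    print(len(select_trails(arc, [similar, diagonal])))    # -> 1
```

Because Pearson correlation is invariant to scale and offset, the scaled copy scores 1.0 while the straight trail is rejected, which matches the intent of shape-based (rather than purely positional) brushing.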
Exploring the Design of Patient-Generated Data Visualizations
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.36
F. Rajabiyazdi, Charles Perin, L. Oehlberg, Sheelagh Carpendale
Abstract: We were approached by a group of healthcare providers involved in the care of chronic patients, looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits. To understand the healthcare providers' attitudes towards reviewing patient-generated data, we (1) conducted a focus group with a mixed group of healthcare providers. Next, to gain the patients' perspectives, we (2) interviewed eight chronic patients, collected a sample of their data, and designed a series of visualizations representing the patient data we collected. Last, we (3) sought feedback on the visualization designs from the healthcare providers who requested this exploration. We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle. Informed by the results of our studies, we discuss the importance of designing patient-generated data visualizations for individuals, considering both patient and healthcare provider rather than designing for generalization, and we provide guidelines for designing future patient-generated data visualizations.
Pages: 362-373
Citations: 8
Fine Feature Reconstruction in Point Clouds by Adversarial Domain Translation
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.35
Prashant Raina, T. Popa, S. Mudur
Abstract: Point cloud neighborhoods are unstructured and often lacking in fine details, particularly when the original surface is sparsely sampled. This has motivated the development of methods for reconstructing these fine geometric features before the point cloud is converted into a mesh, usually by some form of upsampling of the point cloud. We present a novel data-driven approach to reconstructing fine details of the underlying surfaces of point clouds at the local neighborhood level, along with normals and locations of edges. This is achieved by an innovative application of recent advances in domain translation using GANs. We "translate" local neighborhoods between two domains: point cloud neighborhoods and triangular mesh neighborhoods. This allows us to obtain some of the benefits of meshes at training time, while still dealing with point clouds at the time of evaluation. By resampling the translated neighborhood, we can obtain a denser point cloud equipped with normals that allows the underlying surface to be easily reconstructed as a mesh. Our reconstructed meshes preserve fine details of the original surface better than the state of the art in point cloud upsampling techniques, even at different input resolutions. In addition, the trained GAN can generalize to operate on low-resolution point clouds even without being explicitly trained on low-resolution data. We also give an example demonstrating that the same domain translation approach we use for reconstructing local neighborhood geometry can also be used to estimate a scalar field at the newly generated points, thus reducing the need for expensive recomputation of the scalar field on the dense point cloud.
Pages: 349-361
Citations: 0
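The method operates on local neighborhoods of the point cloud. The GAN itself is beyond a short sketch, but the patch-extraction step such a pipeline presupposes can be illustrated; the choice of k and the unit-ball normalization below are assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def local_neighborhood(points: np.ndarray, center_idx: int, k: int = 32):
    """Return the k nearest neighbors of one point, centered and scaled
    to a unit ball so patches are comparable across the cloud."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    idx = np.argsort(d)[:k]                   # k nearest, including the center
    patch = points[idx] - points[center_idx]  # center the patch at the origin
    scale = np.linalg.norm(patch, axis=1).max()
    return patch / max(scale, 1e-9), idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1000, 3))
    patch, idx = local_neighborhood(cloud, center_idx=0, k=32)
    print(patch.shape, np.linalg.norm(patch, axis=1).max())  # (32, 3) 1.0
```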
Part-Based 3D Face Morphable Model with Anthropometric Local Control
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.03
Donya Ghafourzadeh, Cyrus Rahgoshay, Sahel Fallahdoust, A. Beauchamp, Adeline Aubame, T. Popa, Eric Paquette
Abstract: We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMMs suffer from a lack of local control. We thus create a 3DMM by combining local part-based 3DMMs for the eyes, nose, mouth, ears, and facial mask regions. Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMMs to ensure that the combined 3DMM is expressive, while allowing accurate reconstruction. The editing controls we provide to the user are intuitive, as they are extracted from anthropometric measurements found in the literature. Out of a large set of possible anthropometric measurements, we filter those that have meaningful generative power given the face data set. We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans. Our part-based 3DMM is compact, yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and allows the user intuitive local control.
Pages: 7-16
Citations: 12
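Each local part model is a PCA basis over that part's vertices across training scans. A minimal sketch of that machinery, assuming flattened vertex arrays and an illustrative number of retained eigenvectors (the paper selects eigenvectors with a novel criterion rather than simply truncating, and additionally binds anthropometric controls, which this sketch omits):

```python
import numpy as np

def build_part_model(scans: np.ndarray, n_components: int = 10):
    """scans: (n_scans, n_vertices*3) flattened vertex positions of one part."""
    mean = scans.mean(axis=0)
    centered = scans - mean
    # SVD of the centered data matrix; rows of vt are the PCA eigenvectors.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def synthesize(mean, basis, coeffs):
    """Reconstruct one part's shape from PCA coefficients."""
    return mean + coeffs @ basis

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scans = rng.normal(size=(135, 300))   # 135 scans, 100 vertices x 3 coords
    mean, basis = build_part_model(scans)
    part = synthesize(mean, basis, rng.normal(size=10))
    print(part.shape)                     # -> (300,)
```

A full face is then assembled by running one such model per region (eyes, nose, mouth, ears, facial mask) and stitching the parts, which is where the paper's combination method does its work.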
AuthAR: Concurrent Authoring of Tutorials for AR Assembly Guidance
Proceedings. Graphics Interface (Conference) Pub Date: 2019-12-21 DOI: 10.20380/GI2020.43
Matt Whitlock, G. Fitzmaurice, Tovi Grossman, Justin Matejka
Abstract: Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of situated instructions. These instructions can be in the form of videos, pictures, text, or guiding animations, and the most helpful medium among these is highly dependent on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text, and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.
Pages: 431-439
Citations: 20
Biologically-Inspired Gameplay: Movement Algorithms for Artificially Intelligent (AI) Non-Player Characters (NPC)
Proceedings. Graphics Interface (Conference) Pub Date: 2019-06-01 DOI: 10.20380/GI2019.28
Rina R. Wehbe, G. Riberio, Kin Pon Fung, L. Nacke, E. Lank
Abstract: In computer games, designers frequently leverage biologically-inspired movement algorithms such as flocking, particle swarm optimization, and firefly algorithms to give players the perception of intelligent behaviour in groups of enemy non-player characters (NPCs). While extensive effort has been expended designing these algorithms, a comparison between biologically-inspired algorithms and naive directional algorithms (travel towards the opponent) has yet to be completed. In this paper, we compare the biological algorithms listed above against a naive control algorithm to assess the effect that these algorithms have on various measures of player experience. The results reveal that the Swarming algorithm, followed closely by Flocking, provides the best gaming experience. However, players noted that the firefly algorithm was most salient. An understanding of the strengths of different behavioural algorithms for NPCs will contribute to the design of algorithms that depict more intelligent crowd behaviour in gaming and computer simulations.
Pages: 28:1-28:9
Citations: 0
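Of the behaviours compared, flocking is the easiest to show compactly. A minimal boids-style sketch with the classic cohesion, separation, and alignment rules; the weights, neighbourhood radius, and speed cap are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def flock_step(pos, vel, radius=2.0, w_coh=0.01, w_sep=0.05, w_ali=0.05,
               max_speed=0.5):
    """Advance all NPC positions/velocities ((n, 2) arrays) by one tick."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        mask = (d < radius) & (d > 0)          # neighbours, excluding self
        if mask.any():
            new_vel[i] += w_coh * (pos[mask].mean(axis=0) - pos[i])  # cohesion
            new_vel[i] += w_sep * (pos[i] - pos[mask]).sum(axis=0)   # separation
            new_vel[i] += w_ali * (vel[mask].mean(axis=0) - vel[i])  # alignment
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                  # cap speed to keep motion stable
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 10, size=(20, 2))
    vel = rng.normal(0, 0.1, size=(20, 2))
    for _ in range(100):
        pos, vel = flock_step(pos, vel)
    print(pos.shape)  # -> (20, 2); the group drifts as a coherent flock
```

The naive directional control condition from the paper would replace the three rules with a single steering term toward the player's position.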
A Frequency Analysis and Dual Hierarchy for Efficient Rendering of Subsurface Scattering
Proceedings. Graphics Interface (Conference) Pub Date: 2019-06-01 DOI: 10.20380/GI2019.03
David Milaenen, Laurent Belcour, Jean-Philippe Guertin, T. Hachisuka, D. Nowrouzezahrai
Abstract: BSSRDFs are commonly used to model subsurface light transport in highly scattering media such as skin and marble. Rendering with BSSRDFs requires an additional spatial integration, which can be significantly more expensive than surface-only rendering with BRDFs. We introduce a novel hierarchical rendering method that can mitigate this additional spatial integration cost. Our method has two key components: a novel frequency analysis of subsurface light transport, and a dual hierarchy over shading and illumination samples. Our frequency analysis predicts the spatial and angular variation of outgoing radiance due to a BSSRDF. We use this analysis to drive adaptive spatial BSSRDF integration with sparse image and illumination samples. We propose the use of a dual-tree structure that allows us to simultaneously traverse a tree of shade points (i.e., pixels) and a tree of object-space illumination samples. Our dual-tree approach generalizes existing single-tree accelerations. Both our frequency analysis and the dual-tree structure are compatible with most existing BSSRDF models, and we show that our method improves rendering times compared to the state-of-the-art method of Jensen and Buhler.
Pages: 3:1-3:7
Citations: 2
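The paper's dual-tree traversal is involved, but the underlying idea of hierarchical BSSRDF integration can be illustrated in the spirit of the Jensen-Buhler two-pass method it improves on: distant clusters of irradiance samples are approximated by a single aggregate evaluation. The exponential falloff kernel and the crude clustering below are illustrative stand-ins for a real diffusion profile and octree:

```python
import numpy as np

def falloff(r, sigma=0.5):
    """Toy radially symmetric diffusion profile R(r), standing in for a BSSRDF."""
    return np.exp(-r / sigma)

def radiance(shade_pt, samples, irradiance, clusters, cut=2.0):
    """Integrate irradiance samples; use cluster aggregates beyond `cut`."""
    total = 0.0
    for idx in clusters:                       # idx: sample indices per cluster
        centroid = samples[idx].mean(axis=0)
        d = np.linalg.norm(shade_pt - centroid)
        if d > cut:                            # far: one aggregate evaluation
            total += irradiance[idx].sum() * falloff(d)
        else:                                  # near: evaluate every sample
            r = np.linalg.norm(samples[idx] - shade_pt, axis=1)
            total += (irradiance[idx] * falloff(r)).sum()
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    samples = rng.uniform(0, 10, size=(4000, 3))          # surface sample points
    irr = rng.uniform(size=4000)                          # per-sample irradiance
    clusters = np.array_split(np.argsort(samples[:, 0]), 40)  # crude clustering
    print(radiance(np.array([5.0, 5.0, 5.0]), samples, irr, clusters))
```

The paper's contribution is to build a second hierarchy over the shade points as well, so whole groups of pixels can share such aggregate evaluations, with the frequency analysis deciding how coarse each traversal may safely be.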