{"title":"A conversation with CHCCS 2018 achievement award winner Dr. Gordon Kurtenbach","authors":"G. Kurtenbach","doi":"10.20380/GI2018.01","DOIUrl":"https://doi.org/10.20380/GI2018.01","url":null,"abstract":"A 2018 CHCCS Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Gordon Kurtenbach for his many contributions to the field of human-computer interaction (HCI), especially his work on novel interaction techniques for gesture-based and pen-based interfaces, his leadership in building arguably the most successful industry-based computer science research group in Canada, his exemplary role promoting collaboration between universities and industry in Canada, and his active mentorship of some of the best young Canadian researchers in the field. CHCCS invites a publication by the award winner to be included in the proceedings, and this year we continue the tradition of an interview format rather than a formal paper. This permits a casual discussion of the research areas, insights, and contributions of the award winner. What follows is an edited transcript of a conversation between Gordon Kurtenbach and Kellogg Booth that took place in March 2018.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129711407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PadCorrect: Correcting User Input on a Virtual Gamepad","authors":"Leonardo Torok, E. Eisemann, D. Trevisan, A. Montenegro, E. Clua","doi":"10.20380/GI2018.09","DOIUrl":"https://doi.org/10.20380/GI2018.09","url":null,"abstract":"The processing power of modern smartphones allows publishers to port old and current console titles to these platforms. However, these games were designed to be controlled with a traditional gamepad. Normally, the solution used in mobile ports is a virtual gamepad. This interface adds buttons that imitate the layout of a gamepad as a semi-transparent overlay above the game. While this allows users to play the game, it lacks the necessary haptic feedback to provide an enjoyable experience. Frequently, users will miss buttons or press the wrong ones, which affects the in-game performance and leads to frustration. We present a solution to correct user input. First, we retain a few frames of user input instead of passing the data directly to the game. Using time series analysis, we seek known patterns and detect potential mistakes from the user, correcting actions before the commands are received by the game. We called this new gamepad the PadCorrect. In order to measure the impact on the experience, we performed a user study comparing the PadCorrect with a traditional virtual gamepad. The test results showed a good reception and provided evidence that the new interface is capable of improving the experience with mobile games.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121220764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Couch: Investigating the Relationship between Aesthetics and Persuasion in a Mobile Application","authors":"Arda Aydin, A. Girouard","doi":"10.20380/GI2018.20","DOIUrl":"https://doi.org/10.20380/GI2018.20","url":null,"abstract":"Aesthetics, specifically visual appeal, is an important aspect of user experience. It is included as a principle in frameworks such as Fogg's Functional Triad and the Persuasive Systems Design. Yet, literature that directly investigates the influence of aesthetics on persuasion is limited, especially in the context of mobile applications. To understand how aesthetics influences persuasion when combined with operant conditioning, we designed a mobile app called Couch, which aims to reduce sedentary behaviour. We devised a 2x2 between-subject experiment, creating four versions of the app with two levels of aesthetics and two levels of persuasion (with and without). Measuring persuasion through self-reports, we found that higher levels of persuasion, rather than aesthetics, had a significant impact on reducing sedentary behaviour. However, visual appeal had no significant effect on persuasion. We comment on the level of visual appeal of the app and discuss the implications for future work.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115531246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blowhole: Blowing-Activated Tags for Interactive 3D-Printed Models","authors":"Carlos E. Tejada, Osamu Fujimoto, Zhiyuan Li, Daniel Ashbrook","doi":"10.20380/GI2018.18","DOIUrl":"https://doi.org/10.20380/GI2018.18","url":null,"abstract":"Interactive 3D models have the potential to enhance accessibility and education, but can be complex and time-consuming to produce. We present Blowhole, a technique for embedding blowing-activated tags into 3D-printed models to add interactivity. Requiring no special printing techniques, components, or assembly and working on consumer-level 3D printers, Blowhole adds acoustically resonant cavities to the interior of a model with unobtrusive openings at the surface of the object. A gentle blow into a hole produces a unique sound that identifies the hole, allowing a computer to provide associated content. We describe the theory behind Blowhole, characterize the performance of different cavity parameters, and describe our implementation, including easy-to-use software to automatically embed blowholes into preexisting models. We illustrate Blowhole's potential with multiple working examples.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125508534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Yarn: Generating Storyline Visualizations Using HTN Planning","authors":"Kalpesh Padia, K. Bandara, C. Healey","doi":"10.20380/GI2018.05","DOIUrl":"https://doi.org/10.20380/GI2018.05","url":null,"abstract":"Existing storyline visualization techniques represent narratives as a node-link graph where a sequence of links shows the evolution of causal and temporal relationships between characters in the narrative. These techniques make a number of simplifying assumptions about the narrative structure, however. They assume that all narratives progress linearly in time, with a well defined beginning, middle, and end. They assume that at least two participants interact at every event. Finally, they assume that all events in the narrative occur along a single timeline. Thus, while existing techniques are suitable for visualizing linear narratives, they are not well suited for visualizing narratives with multiple timelines, nor for narratives that contain events with only one participant. In this paper we present Yarn, a system for generating and visualizing narratives with multiple timelines. Along with multi-participant events, Yarn can also visualize single-participant events in the narrative. Additionally, Yarn enables pairwise comparison of the multiple narrative timelines.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"54 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131638951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teleportation without Spatial Disorientation Using Optical Flow Cues","authors":"Jiwan Bhandari, P. MacNeilage, Eelke Folmer","doi":"10.20380/GI2018.22","DOIUrl":"https://doi.org/10.20380/GI2018.22","url":null,"abstract":"Teleportation is a popular locomotion technique that lets users navigate beyond the confines of limited available positional tracking space. Because it discontinuously translates the viewpoint, it generates no optical flow and is therefore considered a safe locomotion method that reduces the risk of vection-induced VR sickness. Though the lack of optical flow minimizes VR sickness, it also limits path integration, e.g., estimating the total distance traveled, which can lead to spatial disorientation. This paper evaluates a teleportation technique called Dash that quickly but continuously displaces the user's viewpoint and thus retains some optical flow cues. A user study with 16 participants compared Dash to regular teleportation and found that it significantly improves path integration with no difference in VR sickness.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133553878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the Visual Design of Non-Player Characters for Narrative Roles","authors":"Katja Rogers, Maria Aufheimer, M. Weber, L. Nacke","doi":"10.20380/GI2018.21","DOIUrl":"https://doi.org/10.20380/GI2018.21","url":null,"abstract":"Non-player characters (NPCs) serve important functions for game narratives and influence player immersion. However, the visual design of NPCs for specific narrative roles is often approached by relying on designers' previous experience or guesswork. We contribute to the understanding of player perception of narrative NPC roles in games, by proposing a methodological approach towards the visual design of NPCs to fit specific narrative roles. We demonstrate this approach through the visual design of characters for the three narrative roles of mentor, companion, and enemy. The results of an online survey (n=45) indicate trait expectations towards these narrative roles, and differences therein based on participant gender. Further, the characters were generally perceived as the targeted role based on visual design alone. This method of designing characters for narrative roles is beneficial to both game designers and researchers for further exploring effects of NPCs on player experience.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"38 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131934715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RepulsionPak: Deformation-Driven Element Packing with Repulsion Forces","authors":"Reza Adhitya Saputra, C. Kaplan, P. Asente","doi":"10.20380/GI2018.03","DOIUrl":"https://doi.org/10.20380/GI2018.03","url":null,"abstract":"We present a method to fill a container shape with deformable instances of geometric elements selected from a library, creating a 2D artistic composition called an element packing. Each element is represented as a mass-spring system, allowing them to deform to achieve a better fit with their neighbours and the container. We start with an initial random placement of small elements and gradually transform them using repulsion forces that trade off between the evenness of the packing and the deformations of the individual elements. Our method produces compositions in which the negative space between elements is approximately uniform in width, similar to real-world examples created by artists. We validate our approach by performing a quantitative study using spatial statistics.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MLS2: Sharpness Field Extraction Using CNN for Surface Reconstruction","authors":"Prashant Raina, S. Mudur, T. Popa","doi":"10.20380/GI2018.10","DOIUrl":"https://doi.org/10.20380/GI2018.10","url":null,"abstract":"We address the challenging problem of reconstructing surfaces with sharp features from unstructured and noisy point clouds. For smooth surfaces, moving least squares (MLS) has been a popular method. MLS variants for dealing with sharp features have been proposed, though they have not been as successful. Our take on this problem is very different. By training a convolutional neural network (CNN), we first derive a sharpness field parametrized over the underlying smooth proxy MLS surface. This field provides us with two benefits - (i) it enables us to both detect and reconstruct sharp features, this time using an anisotropic MLS kernel, while preserving most of the MLS reconstruction method's properties, and (ii) unlike classification based methods, it does not require that sharp features be present only at input points. With just a small amount of training data, we demonstrate our results on a set of illustrative test cases and compare qualitatively and quantitatively with results from MLS variants and the more recent PointNet deep learning network.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129576112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MultiCloud: Interactive Word Cloud Visualization for the Analysis of Multiple Texts","authors":"M. John, E. Marbach, S. Lohmann, Florian Heimerl, T. Ertl","doi":"10.20380/GI2018.06","DOIUrl":"https://doi.org/10.20380/GI2018.06","url":null,"abstract":"Word Clouds have gained an impressive momentum for summarizing text documents in the last years. They visually communicate in a clear and descriptive way the most frequent words of a text. However, there are only very few word cloud visualizations that support a contrastive analysis of multiple documents. The available approaches provide comparable overviews of the documents, but have shortcomings regarding the layout, readability, and use of white space. To tackle these challenges, we propose MultiCloud, an approach to visualize multiple documents within a single word cloud in a comprehensible and visually appealing way. MultiCloud comprises several parameters and visual representations that enable users to alter the word cloud visualization in different aspects. Users can set parameters to optimize the usage of available space to get a visual representation that provides an easy visual association of words with the different documents. We evaluated MultiCloud with visualization researchers and a group of domain experts comprising five humanities scholars.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130900234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}