{"title":"gMotion: A Spatio-Temporal Grammar for the Procedural Generation of Motion Graphics","authors":"Edoardo Carra, Christian Santoni, F. Pellacini","doi":"10.20380/GI2018.14","DOIUrl":"https://doi.org/10.20380/GI2018.14","url":null,"abstract":"Creating by hand compelling 2D animations that choreograph several groups of shapes requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both temporal and spatial components. We are able to combine both these aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to expressively construct animation with few rules. The grammar animates shapes, which are represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators showing how we can produce 2D animations of geometric objects, by combining the expressive power of the grammar model, the composability of the operators with themselves, and the capabilities that derive from using a unified spatio-temporal representation for animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would take artists hours of tedious and time-consuming work. In particular, in cases where change of shapes is very common, our grammar can add motion detail to large collections of shapes with greater control over per-shape animations along with a compact rules structure.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122369922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mouse Cursor Movements towards Targets on the Same Screen Edge","authors":"Shota Yamanaka","doi":"10.20380/GI2018.16","DOIUrl":"https://doi.org/10.20380/GI2018.16","url":null,"abstract":"Buttons and icons on screen edges can be selected in a shorter time than those in the central area because the mouse cursor stops due to the impenetrable borderline. However, we have concerns regarding such edge targets, in particular, pointing to an edge target from another edge target on the same edge. For example, users would move the mouse toward outside the screen; thus, the virtual travel distance of the cursor including off-screen movements would be longer. In this study, we empirically confirmed that users exhibit such “pushing-edge” behavior, and 3% of cursor movements are wasted in off-screen movements. We also report how current user-performance models (variations of Fitts' law) can capture such pointing motions between targets on the same edge. The results show that the baseline model (Shannon formula) shows a reasonably high fit (R2 = 0.959), and bivariate pointing models show higher fitness (R2 = 0.966 at most).","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"570 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123041404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EZCursorVR: 2D Selection with Virtual Reality Head-Mounted Displays","authors":"Adrian Ramcharitar, Robert J. Teather","doi":"10.20380/GI2018.17","DOIUrl":"https://doi.org/10.20380/GI2018.17","url":null,"abstract":"We present an evaluation of a new selection technique for virtual reality (VR) systems presented on head-mounted displays. The technique, dubbed EZCursorVR, presents a 2D cursor that moves in a head-fixed plane, simulating 2D desktop-like cursor control for VR. The cursor can be controlled by any 2DOF input device, but also works with 3/6DOF devices using appropriate mappings. We conducted an experiment based on ISO 9241-9, comparing the effectiveness of EZCursorVR using a mouse, a joystick in both velocity-control and position-control mappings, a 2D-constrained ray-based technique, a standard 3D ray, and finally selection via head motion. Results indicate that the mouse offered the highest performance in terms of throughput, movement time, and error rate, while the position-control joystick was worst. The 2D-constrained ray-casting technique proved an effective alternative to the mouse when performing selections using EZCursorVR, offering better performance than standard ray-based selection.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"105 4-6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132330905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting Chinese Character Educational Interfaces with Richer Assessment Feedback through Sketch Recognition","authors":"Tianshu Chu, Paul Taele, T. Hammond","doi":"10.20380/GI2018.08","DOIUrl":"https://doi.org/10.20380/GI2018.08","url":null,"abstract":"Students of Chinese as a Second Language (CSL) with primarily English fluency often struggle with the language's complex character set. Conventional classroom pedagogy and relevant educational applications have focused on providing valuable assessment feedback to address their challenges, but rely on direct instructor observation and provide constrained assessment, respectively. We propose improved sketch recognition techniques to better support Chinese character educational interfaces' realtime assessment of novice CSL students' character writing. Based on successful assessment feedback approaches from existing educational resources, we developed techniques for supporting richer automated assessment, so that students may be better informed of their writing performance outside the classroom. From our evaluations, our techniques achieved recognition rates of 91% and 85% on expert and novice Chinese character handwriting data, respectively, greater than 90% recognition rate on written technique mistakes, and 80.4% f-measure on distinguishing between expert and novice handwriting samples, without sacrificing students' natural writing input of Chinese characters.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126682107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewpoint Snapping to Reduce Cybersickness in Virtual Reality","authors":"Yasin Farmani, Robert J. Teather","doi":"10.20380/GI2018.23","DOIUrl":"https://doi.org/10.20380/GI2018.23","url":null,"abstract":"Cybersickness in virtual reality (VR) is an on-going problem, despite recent advances in technology. In this paper, we propose a method for reducing the likelihood of cybersickness onset when using stationary (e.g., seated) VR setups. Our approach relies on reducing optic flow via inconsistent displacement - the viewpoint is “snapped” during fast movement that would otherwise induce cybersickness. We compared our technique, which we call viewpoint snapping, to a control condition without viewpoint snapping, in a custom-developed VR first-person shooter game. We measured participant cybersickness levels via the Simulator Sickness Questionnaire (SSQ), and user reported levels of nausea, presence, and objective error rate. Overall, our results indicate that viewpoint snapping significantly reduced SSQ reported cybersickness levels by about 40% and resulted in a reduction in participant nausea levels, especially with longer VR exposure. Presence levels and error rate were not significantly different between the viewpoint snapping and the control condition.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122317234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"It's the Gesture That (re)Counts: Annotating While Running to Recall Affective Experience","authors":"Felwah Alqahtani, Derek F. Reilly","doi":"10.20380/GI2018.12","DOIUrl":"https://doi.org/10.20380/GI2018.12","url":null,"abstract":"We present results from a study exploring whether gestural annotations of felt emotion presented on a map-based visualization can support recall of affective experience during recreational runs. We compare gestural annotations with audio and video notes and a “mental note” baseline. In our study, 20 runners were asked to record their emotional state at regular intervals while running a familiar route. Each runner used one of the four methods to capture emotion over four separate runs. Five days after the last run, runners used an interactive map-based visualization to review and recall their running experiences. Results indicate that gestural annotation promoted recall of affective experience more effectively than the baseline condition, as measured by confidence in recall and detail provided. Gestural annotation was also comparable to video and audio annotation in terms of recollection confidence and detail. Audio annotation supported recall primarily through the runner's spoken annotation, but sound in the background was sometimes used. Video annotation yielded the most detail, much directly related to visual cues in the video, however using video annotations required runners to stop during their runs. Given these results we propose that background logging of ambient sounds and video may supplement gestural annotation.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128134238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A conversation with CHCCS 2018 achievement award winner Alla Sheffer","authors":"A. Sheffer","doi":"10.20380/GI2018.02","DOIUrl":"https://doi.org/10.20380/GI2018.02","url":null,"abstract":"A 2018 CHCCS Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Alla Sheffer for her numerous highly impactful contributions to the field of computer graphics research. Her diverse research addresses geometric modeling and processing problems both in traditional computer graphics settings and in multiple other application domains, including product design, mechanical and civil engineering, and fashion design. CHCCS invites a publication by the award winner to be included in the proceedings, and this year we continue the tradition of an interview format rather than a formal paper. This permits a casual discussion of the research areas, insights, and contributions of the award winner. What follows is an edited transcript of a conversation between Alla Sheffer and Paul Kry that took place on 13 March, 2018, via Skype.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130941969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adding Motion Blur to Still Images","authors":"Xuejiao Luo, Nestor Z. Salamon, E. Eisemann","doi":"10.20380/GI2018.15","DOIUrl":"https://doi.org/10.20380/GI2018.15","url":null,"abstract":"Motion blur appears in images as a visible trail along the motion path of the recorded object. It plays an important role in photography to convey a sense of motion but can be difficult to acquire as intended by the photographer. One solution is to add motion blur in a post process but current solutions involve much manual intervention and can lead to artifacts that mix moving and static objects incorrectly. In this paper, we propose a novel method to add motion blur to a single image that generates the illusion of a photographed motion. Relying on a minimal user input, a filtering process is employed to produce a virtual motion effect. It carefully treats object boundaries to avoid artifacts produced by standard filtering methods. We illustrate the effectiveness of our solution with various complex examples, including multiple objects, reflections and high intensity light sources. Our post-processing solution can achieve a convincing outcome, which makes it an alternative to attempting to capture the intended real-world motion blur.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128391759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer-Aided Imagery in Sport and Exercise: A Case Study of Indoor Wall Climbing","authors":"Kourosh Naderi, Jari Takatalo, J. Lipsanen, Perttu Hämäläinen","doi":"10.20380/GI2018.13","DOIUrl":"https://doi.org/10.20380/GI2018.13","url":null,"abstract":"Movement artificial intelligence of simulated humanoid characters has been advancing rapidly through joint efforts of the computer animation, robotics, and machine learning communitites. However, practical real-life applications are still rare. We propose applying the technology to mental practice in sports, which we denote as computer-aided imagery (CAI). Imagery, i.e., rehearsing the task in one's mind, is a difficult cognitive skill that requires accurate mental simulation; we present a novel interactive computational sport simulation for exploring and planning movements and strategies. We utilize a fully physically-based avatar with motion optimization that is not limited by a movement dataset, and customize the avatar with computer vision measurements of user's body. We evaluate the approach with 20 users in preparing for real-life wall climbing. Our results indicate that the approach is promising and can affect body awareness and feelings of competence. However, more research is needed to achieve accurate enough simulation for both gross-motor body movements and fine-motor control of the myriad ways in which climbers can grasp climbing holds or shapes.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126642731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control and Personalization:Younger versus Older Users' Experience of Notifications","authors":"Izabelle Janzen, F. Vitale, Joanna McGrenere","doi":"10.20380/GI2018.19","DOIUrl":"https://doi.org/10.20380/GI2018.19","url":null,"abstract":"With the increasing ubiquity of mobile technology, users are more connected than ever. Notifications facilitate prompt connections to friends, family and work, but also distract us from what we're doing. We investigated how older and younger users thought about, interacted with, and personalized their notifications. We took a qualitative approach, conducting semi-structured interviews primed through a notification categorization activity. We interviewed 20 participants with equal numbers of younger (19-30 years old) and older (48-74) adults. We extend and refine previous qualitative work and show that while enjoyment plays a minor role in the experience of notifications, urgency, directness and social closeness are far more important factors, though context remains a nuanced issue. We found that older users especially desired a sense of control over their notifications that was difficult to achieve with current technology. Lastly, we provide information about what “categories” of notifications users perceive and expand how that can be used in new personalization systems. These results lead us to advocate a number of fundamental changes to how notifications are personalized.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}