Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant
Shirin Feiz, Anatoliy Borodin, Xiaojun Bi, I. V. Ramakrishnan
Proceedings. Graphics Interface (Conference), 2021, pp. 156-165. DOI: 10.20380/GI2021.18
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8857727/pdf/nihms-1777375.pdf

Abstract: We present PaperPal, a wearable smartphone assistant that blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the capability to work both on flat stationary tables and on portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions that guide the user to the form fields; and support for filling out these fields without signature guides. The paper focuses primarily on an essential aspect of PaperPal: an accessible design of its wearable elements, and the design, implementation, and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent smartphone-based form-filling assistant for blind people that requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and has no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%.

BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection
Zhi Li, Maozheng Zhao, Yifan Wang, Sina Rashidian, Furqan Baig, Rui Liu, Wanyu Liu, Michel Beaudouin-Lafon, Brooke Ellison, Fusheng Wang, I. V. Ramakrishnan, Xiaojun Bi
Proceedings. Graphics Interface (Conference), 2021, pp. 231-240. DOI: 10.20380/GI2021.35
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8853835/pdf/nihms-1777407.pdf

Abstract: Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities, weighted by the sampling interval, to determine the selected target. The selection results are fed back to update the prior distribution over targets, which is modeled as a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping (CM) method. Our research shows that both accumulating posteriors and incorporating the prior are effective in improving the performance of eye-gaze based target selection.

{"title":"Assistance for Target Selection in Mobile Augmented Reality","authors":"Vinod Asokan, Scott Bateman, Anthony Tang","doi":"10.20380/GI2020.07","DOIUrl":"https://doi.org/10.20380/GI2020.07","url":null,"abstract":"Mobile augmented reality – where a mobile device is used to view and interact with virtual objects displayed in the real world – is becoming more common. Target selection is the main method of interaction in mobile AR, but is particularly difficult because targets in AR can have challenging characteristics such as moving or being occluded (by digital or real world objects). To address this problem, we conduct a comparative study of target assistance techniques designed for mobile AR. We compared four different cursor-based selection techniques against the standard touch-to-select interaction, finding that a newly adapted Bubble Cursorbased technique performs consistently best across a range of five target characteristics. Our work provides new findings demonstrating the promise of cursor-based target assistance in mobile AR.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"56-65"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45296753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns
Vinayak R. Krishnamurthy, E. Akleman, S. Subramanian, K. Boyd, Chia-an Fu, M. Ebert, Courtney Startett, N. Yadav
Proceedings. Graphics Interface (Conference), 2020, pp. 286-298. DOI: 10.20380/GI2020.29

Abstract: We present a framework for designing interlocking space-filling shapes, which we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. To demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles using the most common 2-fold fabrics, called 2-way genus-1 fabrics: namely plain, twill, and satin weaves.

{"title":"Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture","authors":"Toshiya Isomoto, Shota Yamanaka, B. Shizuki","doi":"10.20380/GI2020.26","DOIUrl":"https://doi.org/10.20380/GI2020.26","url":null,"abstract":"We show a gaze-based command activation technique that is robust to unintentional command activations using a series of manipulation of dwelling on a target and performing a gesture (dwell-then-gesture manipulation). The gesture we adopt is a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we design and fine-tune a gesture detection system based on how users move their gaze revealed through three experiments. Although our technique seems to just combine well-known dwell-based and gesture-based manipulations and to not be enough success rate, our work will be the first work that enriches the vocabulary, which is as much as mouse-based interaction.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"256-266"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49412142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars","authors":"Febi Chajadi, Md. Sami Uddin, C. Gutwin","doi":"10.20380/GI2020.12","DOIUrl":"https://doi.org/10.20380/GI2020.12","url":null,"abstract":"Learnability is important in graphical interfaces because it supports the user’s transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable – but current “flat” and “subtle” designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. There is little known, however, about the effects of visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons’ representations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"103-113"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46869580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality
Sina Masnadi, Andrés N. Vargas González, Brian M. Williamson, J. Laviola
Proceedings. Graphics Interface (Conference), 2020, pp. 340-348. DOI: 10.20380/GI2020.34

Abstract: In this paper we present AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, users find themselves with complete virtual objects, but no intrinsic behaviors have been assigned, forcing them to use unfamiliar desktop-based 3D editing tools. AffordIt! offers an intuitive solution that allows a user to select a region of interest for the mesh cutter tool, assign an intrinsic behavior, and view an animation preview of their work. To evaluate the usability and workload of AffordIt! we ran an exploratory study to gather feedback. In the study we utilize two mesh cutter shapes that select a region of interest and two movement behaviors that a user then assigns to a common household object. The results show high usability with low workload ratings, demonstrating the feasibility of AffordIt! as a valuable 3D authoring tool. Based on these initial results we also present a roadmap of future work that will improve the tool in future iterations.

{"title":"Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations","authors":"Shota Yamanaka","doi":"10.20380/GI2020.44","DOIUrl":"https://doi.org/10.20380/GI2020.44","url":null,"abstract":"For hover-based UIs (e.g., pop-up windows) and scrollable UIs, we investigated mouse-pointing performance for users trying to avoid overshooting a target while aiming for it. Three experiments were conducted with a 1D pointing task in which overshooting was accepted (a) within a temporal delay, (b) via a spatial gap between the target and an unintended item, and (c) with both a delay and a gap. We found that, in general, movement times tended to increase with a shorter delay and a smaller gap if these parameters were independently tested. Therefore, Fitts’ law cannot accurately predict the movement times when various values of delay and/or gap are used. We found that 800 ms is required to remove negative effects of distractor for densely arranged targets, but we found no optimal gap.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"440-451"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43596923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of Body-Referenced Graphical Menus in Virtual Environments","authors":"Irina Lediaeva, J. Laviola","doi":"10.20380/GI2020.31","DOIUrl":"https://doi.org/10.20380/GI2020.31","url":null,"abstract":"Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, adopted 2D menus in VEs are still lacking a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"308-316"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42314618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}