Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant
Shirin Feiz, Anatoliy Borodin, Xiaojun Bi, I. V. Ramakrishnan
Proceedings. Graphics Interface (Conference), 2021, pp. 156-165. DOI: 10.20380/GI2021.18
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8857727/pdf/nihms-1777375.pdf

Abstract: We present PaperPal, a wearable smartphone assistant that blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the ability to work both on flat stationary tables and on portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions guiding the user to the form fields; and support for filling out these fields without signature guides. The paper focuses on an essential aspect of PaperPal: the accessible design of its wearable elements and the design, implementation, and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent smartphone-based form-filling assistant for blind people that requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and provides no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they could fill out form fields at the correct locations with an accuracy reaching 96.7%.

BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection
Zhi Li, Maozheng Zhao, Yifan Wang, Sina Rashidian, Furqan Baig, Rui Liu, Wanyu Liu, Michel Beaudouin-Lafon, Brooke Ellison, Fusheng Wang, Ramakrishnan, Xiaojun Bi
Proceedings. Graphics Interface (Conference), 2021, pp. 231-240. DOI: 10.20380/GI2021.35
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8853835/pdf/nihms-1777407.pdf

Abstract: Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities, weighted by the sampling interval, to determine the selected target. The selection results are fed back to update the prior distribution over targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over both a dwell-based selection method and the Center of Gravity Mapping (CM) method. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze-based target selection.

Personal+Context navigation: combining AR and shared displays in network path-following
R. James, A. Bezerianos, O. Chapuis, Maxime Cordeil, Tim Dwyer, Arnaud Prouzeau
Proceedings. Graphics Interface (Conference), 2020, pp. 267-278. DOI: 10.20380/GI2020.27

Abstract: Shared displays are well suited to public viewing and collaboration; however, they lack personal space in which to view private information and act without disturbing others. Combining them with Augmented Reality (AR) headsets allows interaction without altering the context on the shared display. We study a set of such interaction techniques in the context of network navigation, in particular path following, an important network analysis task. Applications abound, for example planning private trips on a network map shown on a public display. The proposed techniques allow for hands-free interaction, rendering visual aids inside the headset to help the viewer maintain a connection between the AR cursor and the network that is shown only on the shared display. In two experiments on path following, we found that adding persistent connections between the AR cursor and the network on the shared display works well for high-precision tasks, whereas more transient connections work best for lower-precision tasks. More broadly, we show that combining personal AR interaction with shared displays is feasible for network navigation.
{"title":"Assistance for Target Selection in Mobile Augmented Reality","authors":"Vinod Asokan, Scott Bateman, Anthony Tang","doi":"10.20380/GI2020.07","DOIUrl":"https://doi.org/10.20380/GI2020.07","url":null,"abstract":"Mobile augmented reality – where a mobile device is used to view and interact with virtual objects displayed in the real world – is becoming more common. Target selection is the main method of interaction in mobile AR, but is particularly difficult because targets in AR can have challenging characteristics such as moving or being occluded (by digital or real world objects). To address this problem, we conduct a comparative study of target assistance techniques designed for mobile AR. We compared four different cursor-based selection techniques against the standard touch-to-select interaction, finding that a newly adapted Bubble Cursorbased technique performs consistently best across a range of five target characteristics. Our work provides new findings demonstrating the promise of cursor-based target assistance in mobile AR.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"56-65"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45296753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Presenting Information Closer to Mobile Crane Operators' Line of Sight: Designing and Evaluating Visualisation Concepts Based on Transparent Displays
T. Sitompul, Rikard Lindell, Markus Wallmyr, Antti Siren
Proceedings. Graphics Interface (Conference), 2020, pp. 413-422. DOI: 10.20380/GI2020.41

Abstract: We investigated the visualisation of safety information for mobile crane operations using transparent displays, where the information can be presented closer to operators' line of sight with minimal obstruction of their view. The intention of the design is to help operators acquire supportive information provided by the machine without requiring them to divert their attention far from the operational area. We started the design process by reviewing mobile crane safety guidelines to determine which information operators need to know in order to perform safe operations. Using the findings from the safety-guidelines review, we then conducted a design workshop to generate design ideas and visualisation concepts, and to delineate their appearance and behaviour based on the capabilities of transparent displays. We transformed the results of the workshop into a low-fidelity paper prototype and then interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, as information is presented closer to operators' line of sight, we need to be selective about what kind of information, and how much of it, should be presented to operators. However, all the operators appreciated having information presented closer to their line of sight as an approach that has the potential to improve the safety of their operations.

Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns
Vinayak R. Krishnamurthy, E. Akleman, S. Subramanian, K. Boyd, Chia-an Fu, M. Ebert, Courtney Startett, N. Yadav
Proceedings. Graphics Interface (Conference), 2020, pp. 286-298. DOI: 10.20380/GI2020.29

Abstract: We present a framework for designing interlocking space-filling shapes, which we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. To demonstrate this general framework, we focus on the specific symmetry operations induced by bi-axial weaving patterns. We showcase the design and fabrication of woven tiles using the most common 2-fold fabrics, called 2-way genus-1 fabrics, namely plain, twill, and satin weaves.
{"title":"Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture","authors":"Toshiya Isomoto, Shota Yamanaka, B. Shizuki","doi":"10.20380/GI2020.26","DOIUrl":"https://doi.org/10.20380/GI2020.26","url":null,"abstract":"We show a gaze-based command activation technique that is robust to unintentional command activations using a series of manipulation of dwelling on a target and performing a gesture (dwell-then-gesture manipulation). The gesture we adopt is a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we design and fine-tune a gesture detection system based on how users move their gaze revealed through three experiments. Although our technique seems to just combine well-known dwell-based and gesture-based manipulations and to not be enough success rate, our work will be the first work that enriches the vocabulary, which is as much as mouse-based interaction.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"256-266"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49412142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars","authors":"Febi Chajadi, Md. Sami Uddin, C. Gutwin","doi":"10.20380/GI2020.12","DOIUrl":"https://doi.org/10.20380/GI2020.12","url":null,"abstract":"Learnability is important in graphical interfaces because it supports the user’s transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable – but current “flat” and “subtle” designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. There is little known, however, about the effects of visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons’ representations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"103-113"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46869580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality
Sina Masnadi, Andrés N. Vargas González, Brian M. Williamson, J. Laviola
Proceedings. Graphics Interface (Conference), 2020, pp. 340-348. DOI: 10.20380/GI2020.34

Abstract: In this paper we present AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, users find themselves with complete virtual objects to which no intrinsic behaviors have been assigned, forcing them to use unfamiliar desktop-based 3D editing tools. AffordIt! offers an intuitive solution that allows a user to select a region of interest with a mesh cutter tool, assign an intrinsic behavior, and view an animation preview of their work. To evaluate the usability and workload of AffordIt!, we ran an exploratory study to gather feedback. In the study we used two mesh cutter shapes for selecting a region of interest and two movement behaviors that a user then assigns to a common household object. The results show high usability with low workload ratings, demonstrating the feasibility of AffordIt! as a valuable 3D authoring tool. Based on these initial results, we also present a roadmap of future work to improve the tool in future iterations.