Proceedings. Graphics Interface (Conference): Latest Publications

Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant.
Proceedings. Graphics Interface (Conference) Pub Date : 2021-05-01 DOI: 10.20380/GI2021.18
Shirin Feiz, Anatoliy Borodin, Xiaojun Bi, I V Ramakrishnan
{"title":"Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant.","authors":"Shirin Feiz,&nbsp;Anatoliy Borodin,&nbsp;Xiaojun Bi,&nbsp;I V Ramakrishnan","doi":"10.20380/GI2021.18","DOIUrl":"https://doi.org/10.20380/GI2021.18","url":null,"abstract":"<p><p>We present PaperPal, a wearable smartphone assistant which blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with adjustable camera angle; capability to work on both flat stationary tables and portable clipboards; real-time video tracking of pen and paper which is coupled to an interface that generates real-time audio read outs of the form's text content and instructions to guide the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal, namely an accessible design of the wearable elements of PaperPal and the design, implementation and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent work on smartphone-based assistant for blind people for filling paper forms that requires the smartphone and the paper to be placed on a stationary desk, needs the signature guide for form filling, and has no audio read outs of the form's text content. PaperPal, whose design was informed by a separate wizard-of-oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%.</p>","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"2021 ","pages":"156-165"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8857727/pdf/nihms-1777375.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39635839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
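The audio guidance idea is straightforward to sketch: compare the tracked pen-tip position against the target field's location and speak a relative direction. The Python sketch below is purely illustrative; the function names, coordinate convention, and tolerance are assumptions, not PaperPal's implementation.

    # Hypothetical guidance step: names, coordinates (y grows downward), and the
    # 10 px tolerance are assumptions for illustration, not the authors' code.
    def guide_pen_to_field(pen_xy, field_box, tolerance=10):
        """Return a spoken instruction that moves the pen toward a form field.
        pen_xy: (x, y) pen-tip position; field_box: (x_min, y_min, x_max, y_max)."""
        x, y = pen_xy
        x_min, y_min, x_max, y_max = field_box
        cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
        if abs(x - cx) <= tolerance and abs(y - cy) <= tolerance:
            return "You are on the field. Start writing."
        horizontal = "right" if x < cx - tolerance else "left" if x > cx + tolerance else ""
        vertical = "down" if y < cy - tolerance else "up" if y > cy + tolerance else ""
        direction = " and ".join(d for d in (horizontal, vertical) if d)
        return "Move the pen " + direction + "."

    print(guide_pen_to_field((120, 80), (300, 70, 420, 95)))  # Move the pen right.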
BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection.
Proceedings. Graphics Interface (Conference) Pub Date : 2021-05-01 DOI: 10.20380/GI2021.35
Zhi Li, Maozheng Zhao, Yifan Wang, Sina Rashidian, Furqan Baig, Rui Liu, Wanyu Liu, Michel Beaudouin-Lafon, Brooke Ellison, Fusheng Wang, Ramakrishnan, Xiaojun Bi
{"title":"BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection.","authors":"Zhi Li,&nbsp;Maozheng Zhao,&nbsp;Yifan Wang,&nbsp;Sina Rashidian,&nbsp;Furqan Baig,&nbsp;Rui Liu,&nbsp;Wanyu Liu,&nbsp;Michel Beaudouin-Lafon,&nbsp;Brooke Ellison,&nbsp;Fusheng Wang,&nbsp;Ramakrishnan,&nbsp;Xiaojun Bi","doi":"10.20380/GI2021.35","DOIUrl":"https://doi.org/10.20380/GI2021.35","url":null,"abstract":"<p><p>Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach of determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses the Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by sampling interval to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method, and the Center of Gravity Mapping (CM) method. Our research shows that both accumulating posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection.</p>","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"2021 ","pages":"231-240"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8853835/pdf/nihms-1777407.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39635840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
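The accumulation scheme described in the abstract can be sketched in a few lines. The Gaussian likelihood model, its sigma, and the add-one prior update below are assumptions made for illustration; the paper's actual likelihood and prior-update rules may differ.

    # Sketch of Bayesian target selection from a gaze trajectory. The Gaussian
    # likelihood and add-one smoothing are illustrative assumptions.
    import numpy as np

    def select_target(gaze_points, intervals, target_centers, prior, sigma=40.0):
        """gaze_points: (N, 2); intervals: (N,) sampling intervals in seconds;
        target_centers: (K, 2); prior: (K,) categorical prior over targets."""
        gaze_points = np.asarray(gaze_points, float)
        target_centers = np.asarray(target_centers, float)
        scores = np.zeros(len(target_centers))
        for point, dt in zip(gaze_points, intervals):
            d2 = np.sum((target_centers - point) ** 2, axis=1)
            likelihood = np.exp(-d2 / (2 * sigma ** 2))   # P(sample | target), assumed Gaussian
            posterior = likelihood * prior / np.sum(likelihood * prior)  # Bayes' theorem
            scores += dt * posterior                      # accumulate, weighted by interval
        return int(np.argmax(scores))

    def update_prior(selection_counts):
        """Categorical prior updated from past selections (add-one smoothing)."""
        counts = np.asarray(selection_counts, float) + 1.0
        return counts / counts.sum()

    targets = [(100, 100), (300, 100), (200, 250)]
    prior = update_prior([5, 1, 1])
    samples = [(105, 98), (102, 103), (98, 101)]
    print(select_target(samples, [0.03, 0.03, 0.03], targets, prior))  # 0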
Interactive Exploration of Genomic Conservation
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.09
V. Bandi, C. Gutwin
{"title":"Interactive Exploration of Genomic Conservation","authors":"V. Bandi, C. Gutwin","doi":"10.20380/GI2020.09","DOIUrl":"https://doi.org/10.20380/GI2020.09","url":null,"abstract":"Comparative analysis in genomics involves comparing two or more genomes to identify conserved genetic information. These duplicated regions can indicate shared ancestry and can shed light on an organism’s internal functions and evolutionary history. Due to rapid advances in sequencing technology, high-resolution genome data is now available for a wide range of species, and comparative analysis of this data can provide insights that can be applied in medicine, plant breeding, and many other areas. Comparative genomics is a strongly interactive task, and visualizing the location, size, and orientation of conserved regions can assist researchers by supporting critical activities of interpretation and judgement. However, visualization tools for the analysis of conserved regions have not Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. GI’20, May 21–22, 2020, Toronto, ON, Canada © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 978-1-4503-6708-0/20/04. . . $15.00 DOI: https://doi.org/10.1145/3313831.XXXXXXX kept pace with the increasing availability of genomic information and the new ways in which this data is being used by biological researchers. To address this gap, we gathered requirements for interactive exploration from three groups of expert genomic scientists, and developed a web-based tool with novel interaction techniques and visual representations to meet those needs. Our tool supports multi-resolution analysis, provides interactive filtering as researchers move deeper into the genome, supports revisitation to specific interface configurations, and enables loosely-coupled collaboration over the genomic data. An evaluation of the system with five researchers from three expert groups provides evidence about the success of our system’s novel techniques for supporting interactive exploration of genomic conservation.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"74-83"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41741499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
Assistance for Target Selection in Mobile Augmented Reality
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.07
Vinod Asokan, Scott Bateman, Anthony Tang
{"title":"Assistance for Target Selection in Mobile Augmented Reality","authors":"Vinod Asokan, Scott Bateman, Anthony Tang","doi":"10.20380/GI2020.07","DOIUrl":"https://doi.org/10.20380/GI2020.07","url":null,"abstract":"Mobile augmented reality – where a mobile device is used to view and interact with virtual objects displayed in the real world – is becoming more common. Target selection is the main method of interaction in mobile AR, but is particularly difficult because targets in AR can have challenging characteristics such as moving or being occluded (by digital or real world objects). To address this problem, we conduct a comparative study of target assistance techniques designed for mobile AR. We compared four different cursor-based selection techniques against the standard touch-to-select interaction, finding that a newly adapted Bubble Cursorbased technique performs consistently best across a range of five target characteristics. Our work provides new findings demonstrating the promise of cursor-based target assistance in mobile AR.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"56-65"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45296753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
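The classic bubble-cursor rule that the adapted technique builds on is simple: pick the target whose boundary, not its centre, is nearest to the cursor. The sketch below shows that rule in 2D; the paper's mobile-AR adaptation (screen-space projection, moving and occluded targets) is not modelled here.

    # Minimal sketch of the classic bubble-cursor selection rule: choose the
    # target whose edge is closest to the cursor position.
    import math

    def bubble_cursor_pick(cursor, targets):
        """cursor: (x, y); targets: list of dicts with 'center' (x, y) and 'radius'.
        Returns the index of the target whose edge is nearest to the cursor."""
        def edge_distance(t):
            dx = cursor[0] - t["center"][0]
            dy = cursor[1] - t["center"][1]
            return max(0.0, math.hypot(dx, dy) - t["radius"])
        return min(range(len(targets)), key=lambda i: edge_distance(targets[i]))

    targets = [{"center": (100, 100), "radius": 8}, {"center": (190, 100), "radius": 45}]
    print(bubble_cursor_pick((128, 100), targets))  # 1: its edge is nearer despite the farther centre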
Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.29
Vinayak R. Krishnamurthy, E. Akleman, S. Subramanian, K. Boyd, Chia-an Fu, M. Ebert, Courtney Startett, N Yadav
{"title":"Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns","authors":"Vinayak R. Krishnamurthy, E. Akleman, S. Subramanian, K. Boyd, Chia-an Fu, M. Ebert, Courtney Startett, N Yadav","doi":"10.20380/GI2020.29","DOIUrl":"https://doi.org/10.20380/GI2020.29","url":null,"abstract":"shapes which we call bi-axial woven tiles. Our framework is based on a unique combina- tion of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles and the closure property under symmetry operations ensure single tile can fill space. In order to demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles by using the most common 2-fold fabrics called 2-way genus-1 fabrics, namely, plain, twill, and satin weaves.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"286-298"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44107132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
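The first ingredient, Voronoi partitioning with curve segments as sites, can be illustrated with a brute-force labelling of a voxel grid: sample each curve densely and assign every cell to its nearest curve. This is only a toy sketch of that single step; the symmetry-closed weave curves and the interlocking guarantees of the actual framework are not reproduced here.

    # Toy illustration of Voronoi partitioning with curves as sites: label each
    # grid cell by the nearest sampled point of any curve. Grid size and sampling
    # density are arbitrary choices, not the authors' construction.
    import numpy as np

    def voronoi_from_curves(curves, grid=(20, 20, 20), samples=100):
        """curves: callables t -> (x, y, z), t in [0, 1], inside the unit cube.
        Returns an integer label volume assigning each cell to its nearest curve."""
        ts = np.linspace(0.0, 1.0, samples)
        sites = [np.array([c(t) for t in ts]) for c in curves]

        axes = [np.linspace(0.0, 1.0, n) for n in grid]
        xx, yy, zz = np.meshgrid(*axes, indexing="ij")
        cells = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)

        # distance from every cell centre to the nearest sample of each curve
        dists = np.stack([
            np.min(np.linalg.norm(cells[:, None, :] - s[None, :, :], axis=-1), axis=1)
            for s in sites
        ])
        return np.argmin(dists, axis=0).reshape(grid)

    # Two toy "strands": one running along x, one along y, at different heights.
    labels = voronoi_from_curves([lambda t: (t, 0.5, 0.4), lambda t: (0.5, t, 0.6)])
    print(labels.shape, np.unique(labels))  # (20, 20, 20) [0 1]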
Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.26
Toshiya Isomoto, Shota Yamanaka, B. Shizuki
{"title":"Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture","authors":"Toshiya Isomoto, Shota Yamanaka, B. Shizuki","doi":"10.20380/GI2020.26","DOIUrl":"https://doi.org/10.20380/GI2020.26","url":null,"abstract":"We show a gaze-based command activation technique that is robust to unintentional command activations using a series of manipulation of dwelling on a target and performing a gesture (dwell-then-gesture manipulation). The gesture we adopt is a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we design and fine-tune a gesture detection system based on how users move their gaze revealed through three experiments. Although our technique seems to just combine well-known dwell-based and gesture-based manipulations and to not be enough success rate, our work will be the first work that enriches the vocabulary, which is as much as mouse-based interaction.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"256-266"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49412142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
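A minimal version of the dwell-then-gesture check can be written as two stages: verify a dwell on the target, then accept only a pair of roughly orthogonal strokes. The thresholds and the direction classifier below are assumptions for illustration, not the detector the authors fine-tuned from their three experiments.

    # Simplified dwell-then-gesture detector; thresholds and the stroke classifier
    # are illustrative assumptions, not the authors' tuned parameters.
    import math

    DWELL_RADIUS = 30    # px: gaze must stay this close to the target centre
    DWELL_TIME = 0.4     # s:  minimum dwell span before a gesture is accepted
    STROKE_LENGTH = 60   # px: minimum displacement counted as a stroke

    def stroke_direction(start, end):
        dx, dy = end[0] - start[0], end[1] - start[1]
        if math.hypot(dx, dy) < STROKE_LENGTH:
            return None
        if abs(dx) > abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    def detect_command(gaze_samples, stroke1, stroke2, target):
        """gaze_samples: list of (t, x, y) during the dwell phase;
        stroke1, stroke2: ((x0, y0), (x1, y1)) start/end points of each stroke."""
        on_target = [t for t, x, y in gaze_samples
                     if math.hypot(x - target[0], y - target[1]) <= DWELL_RADIUS]
        if not on_target or on_target[-1] - on_target[0] < DWELL_TIME:
            return None                          # no valid dwell: reject (robustness)
        d1, d2 = stroke_direction(*stroke1), stroke_direction(*stroke2)
        if d1 is None or d2 is None:
            return None
        horizontal = {"left", "right"}
        return (d1, d2) if (d1 in horizontal) != (d2 in horizontal) else None

    dwell = [(0.0, 200, 200), (0.25, 205, 198), (0.5, 199, 203)]
    print(detect_command(dwell, ((200, 200), (300, 200)), ((300, 200), (300, 120)),
                         target=(200, 200)))     # ('right', 'up')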
Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.12
Febi Chajadi, Md. Sami Uddin, C. Gutwin
{"title":"Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars","authors":"Febi Chajadi, Md. Sami Uddin, C. Gutwin","doi":"10.20380/GI2020.12","DOIUrl":"https://doi.org/10.20380/GI2020.12","url":null,"abstract":"Learnability is important in graphical interfaces because it supports the user’s transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable – but current “flat” and “subtle” designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. There is little known, however, about the effects of visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons’ representations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"103-113"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46869580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.34
Sina Masnadi, Andrés N. Vargas González, Brian M. Williamson, J. Laviola
{"title":"AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality","authors":"Sina Masnadi, Andrés N. Vargas González, Brian M. Williamson, J. Laviola","doi":"10.20380/GI2020.34","DOIUrl":"https://doi.org/10.20380/GI2020.34","url":null,"abstract":"In this paper we present AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, users find themselves with complete virtual objects, but no intrinsic behaviors have been assigned, forcing them to use unfamiliar Desktop-based 3D editing tools. AffordIt! offers an intuitive solution that allows a user to select a region of interest for the mesh cutter tool, assign an intrinsic behavior and view an animation preview of their work. To evaluate the usability and workload of AffordIt! we ran an exploratory study to gather feedback. In the study we utilize two mesh cutter shapes that select a region of interest and two movement behaviors that a user then assigns to a common household object. The results show high usability with low workload ratings, demonstrating the feasibility of AffordIt! as a valuable 3D authoring tool. Based on these initial results we also present a road-map of future work that will improve the tool in future iterations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"340-348"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42001090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
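The workflow described above, selecting a region of interest with a mesh cutter and then attaching an intrinsic behaviour to it, can be sketched as two small steps. The data model and names below are hypothetical and only illustrate the idea, not AffordIt!'s implementation.

    # Hypothetical sketch of "cut a component, then tag it with a behaviour".
    import numpy as np

    def select_component(vertices, cutter_center, cutter_radius):
        """vertices: (N, 3) array. Returns indices of vertices inside a spherical cutter."""
        d = np.linalg.norm(np.asarray(vertices) - np.asarray(cutter_center), axis=1)
        return np.nonzero(d <= cutter_radius)[0]

    def assign_behaviour(component_indices, kind, axis, angle_range):
        """Attach an intrinsic behaviour record to the selected component (e.g., a door hinge)."""
        return {"vertices": component_indices, "behaviour": kind,
                "axis": axis, "range_degrees": angle_range}

    verts = np.random.rand(1000, 3)                      # stand-in for a reconstructed mesh
    door = select_component(verts, cutter_center=(0.8, 0.5, 0.5), cutter_radius=0.25)
    hinge = assign_behaviour(door, kind="rotation", axis=(0, 0, 1), angle_range=(0, 110))
    print(len(door), hinge["behaviour"])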
Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.44
Shota Yamanaka
{"title":"Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations","authors":"Shota Yamanaka","doi":"10.20380/GI2020.44","DOIUrl":"https://doi.org/10.20380/GI2020.44","url":null,"abstract":"For hover-based UIs (e.g., pop-up windows) and scrollable UIs, we investigated mouse-pointing performance for users trying to avoid overshooting a target while aiming for it. Three experiments were conducted with a 1D pointing task in which overshooting was accepted (a) within a temporal delay, (b) via a spatial gap between the target and an unintended item, and (c) with both a delay and a gap. We found that, in general, movement times tended to increase with a shorter delay and a smaller gap if these parameters were independently tested. Therefore, Fitts’ law cannot accurately predict the movement times when various values of delay and/or gap are used. We found that 800 ms is required to remove negative effects of distractor for densely arranged targets, but we found no optimal gap.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"440-451"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43596923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
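For reference, the baseline model the abstract says breaks down is Fitts' law in its Shannon formulation, MT = a + b * log2(D/W + 1); it has no term for the temporal delay or spatial gap manipulated in the study. The coefficients in the sketch below are illustrative, not values fitted in the paper.

    # Fitts' law (Shannon formulation). The intercept a and slope b here are
    # illustrative placeholders, not coefficients reported by the study.
    import math

    def fitts_mt(distance, width, a=0.2, b=0.15):
        """Predicted movement time (s) for a target of size `width` at `distance`."""
        index_of_difficulty = math.log2(distance / width + 1)   # bits
        return a + b * index_of_difficulty

    print(round(fitts_mt(distance=512, width=32), 3))  # 0.813 s for ID = log2(17) ≈ 4.09 bits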
Evaluation of Body-Referenced Graphical Menus in Virtual Environments
Proceedings. Graphics Interface (Conference) Pub Date : 2020-04-04 DOI: 10.20380/GI2020.31
Irina Lediaeva, J. Laviola
{"title":"Evaluation of Body-Referenced Graphical Menus in Virtual Environments","authors":"Irina Lediaeva, J. Laviola","doi":"10.20380/GI2020.31","DOIUrl":"https://doi.org/10.20380/GI2020.31","url":null,"abstract":"Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, adopted 2D menus in VEs are still lacking a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"308-316"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42314618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7