Understanding and Improving Information Extraction From Online Geospatial Data Visualizations for Screen-Reader Users
Ather Sharif, Andrew M. Zhang, Anna Shih, J. Wobbrock, Katharina Reinecke
Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, October 22, 2022. DOI: 10.1145/3517428.3550363
Citations: 9
Abstract
Prior work has studied the interaction experiences of screen-reader users with simple online data visualizations (e.g., bar charts, line graphs, scatter plots), highlighting the disenfranchisement of screen-reader users in accessing information from these visualizations. However, the interactions of screen-reader users with online geospatial data visualizations, commonly used by visualization creators to represent geospatial data (e.g., COVID-19 cases per US state), remain unexplored. In this work, we study how screen-reader users interact with, and extract information from, online geospatial data visualizations. Specifically, we conducted a user study with 12 screen-reader users to understand the information they seek from online geospatial data visualizations and the questions they ask to extract that information. We used our findings to generate a taxonomy of the information our participants sought during their interactions. Additionally, we extended the functionalities of VoxLens, an open-source multi-modal solution that improves data visualization accessibility, to enable screen-reader users to extract information from online geospatial data visualizations.
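To make the kind of interaction described above concrete, the sketch below shows one way a web page could answer the sort of aggregate questions such a taxonomy covers (e.g., "Which state has the highest value?") over geospatial data and expose the answer to screen readers through an ARIA live region. This is a minimal, hypothetical illustration in TypeScript, not VoxLens's actual API; the placeholder dataset, the answerQuery function, and the viz-answer element ID are assumptions made for the example.

```typescript
// Hypothetical sketch (not the VoxLens API): answer a simple aggregate query
// over per-state geospatial data and announce the result to screen readers.

type StateDatum = { state: string; value: number };

// Placeholder data in the spirit of the paper's example (cases per US state);
// the numbers are illustrative, not real figures.
const data: StateDatum[] = [
  { state: "Washington", value: 1200 },
  { state: "Texas", value: 3400 },
  { state: "New York", value: 2900 },
];

// Map a natural-language question to a concise textual answer
// (maximum, minimum, or the value for a named region).
function answerQuery(query: string, data: StateDatum[]): string {
  const q = query.toLowerCase();
  if (q.includes("maximum") || q.includes("highest")) {
    const max = data.reduce((a, b) => (b.value > a.value ? b : a));
    return `${max.state} has the highest value: ${max.value}.`;
  }
  if (q.includes("minimum") || q.includes("lowest")) {
    const min = data.reduce((a, b) => (b.value < a.value ? b : a));
    return `${min.state} has the lowest value: ${min.value}.`;
  }
  const named = data.find((d) => q.includes(d.state.toLowerCase()));
  if (named) {
    return `${named.state}: ${named.value}.`;
  }
  return "Sorry, no answer was found for that question.";
}

// Surface the answer through an ARIA live region so screen readers announce it.
// Assumes the page contains: <div id="viz-answer" aria-live="polite"></div>
function announce(text: string): void {
  const region = document.getElementById("viz-answer");
  if (region) {
    region.textContent = text;
  }
}

announce(answerQuery("Which state has the maximum cases?", data));
```

As the abstract notes, VoxLens is multi-modal; this sketch covers only a text-response path, showing the shape of the mapping from a question about geospatial data to a concise, screen-reader-friendly answer.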