Guideline-Based Evaluation of Web Readability

Aliaksei Miniukovich, M. Scaltritti, Simone Sulpizio, A. D. Angeli
{"title":"基于指南的Web可读性评估","authors":"Aliaksei Miniukovich, M. Scaltritti, Simone Sulpizio, A. D. Angeli","doi":"10.1145/3290605.3300738","DOIUrl":null,"url":null,"abstract":"Effortless reading remains an issue for many Web users, despite a large number of readability guidelines available to designers. This paper presents a study of manual and automatic use of 39 readability guidelines in webpage evaluation. The study collected the ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex framework: algorithms were better or as good as human experts at evaluating webpages on specific guidelines - particularly those about low-level features of webpage legibility and text formatting. However, multiple guidelines still required a human judgment related to understanding and interpreting webpage content. These results contribute a guideline categorization laying the ground for future design evaluation methods.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":"{\"title\":\"Guideline-Based Evaluation of Web Readability\",\"authors\":\"Aliaksei Miniukovich, M. Scaltritti, Simone Sulpizio, A. D. Angeli\",\"doi\":\"10.1145/3290605.3300738\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Effortless reading remains an issue for many Web users, despite a large number of readability guidelines available to designers. This paper presents a study of manual and automatic use of 39 readability guidelines in webpage evaluation. The study collected the ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex framework: algorithms were better or as good as human experts at evaluating webpages on specific guidelines - particularly those about low-level features of webpage legibility and text formatting. However, multiple guidelines still required a human judgment related to understanding and interpreting webpage content. 
These results contribute a guideline categorization laying the ground for future design evaluation methods.\",\"PeriodicalId\":20454,\"journal\":{\"name\":\"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"27\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3290605.3300738\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3290605.3300738","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27

Abstract

Effortless reading remains an issue for many Web users, despite a large number of readability guidelines available to designers. This paper presents a study of manual and automatic use of 39 readability guidelines in webpage evaluation. The study collected the ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex framework: algorithms were better or as good as human experts at evaluating webpages on specific guidelines - particularly those about low-level features of webpage legibility and text formatting. However, multiple guidelines still required a human judgment related to understanding and interpreting webpage content. These results contribute a guideline categorization laying the ground for future design evaluation methods.
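The abstract draws a line between guidelines that algorithms handled well (low-level features of legibility and text formatting) and guidelines that still needed a human to understand and interpret page content. The paper's own evaluation code and its 39 guidelines are not reproduced here, so the sketch below is only a hypothetical illustration of what an automatic check of low-level guidelines could look like: the guideline wording, the PageFeatures fields, the check_guidelines helper, and every threshold are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only. Guideline names, feature names and thresholds are
# hypothetical examples of automatically checkable, low-level readability rules;
# they are NOT the 39 guidelines evaluated in the paper.
import re
from dataclasses import dataclass


@dataclass
class PageFeatures:
    body_font_px: float        # computed font size of body text, in pixels
    avg_line_chars: float      # average characters per rendered text line
    avg_sentence_words: float  # average sentence length in the main text
    all_caps_ratio: float      # share of words rendered in ALL CAPS


def avg_sentence_words(text: str) -> float:
    """Crude sentence-length estimate used to fill PageFeatures."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)


def check_guidelines(p: PageFeatures) -> dict:
    """Return pass/fail for a few example low-level readability checks."""
    return {
        "body text is at least 12px": p.body_font_px >= 12,
        "lines are not overly long (<= 90 chars)": p.avg_line_chars <= 90,
        "sentences are reasonably short (<= 25 words)": p.avg_sentence_words <= 25,
        "all-caps text is rare (< 5% of words)": p.all_caps_ratio < 0.05,
    }


if __name__ == "__main__":
    sample_text = ("Short sentences help. Very long winding sentences with many "
                   "clauses and digressions tend to slow readers down.")
    page = PageFeatures(body_font_px=11.0,
                        avg_line_chars=72.5,
                        avg_sentence_words=avg_sentence_words(sample_text),
                        all_caps_ratio=0.01)
    for rule, ok in check_guidelines(page).items():
        print(("PASS" if ok else "FAIL"), "-", rule)
```

Content-level guidelines, such as whether the text is understandable to its intended audience, do not reduce to thresholds of this kind, which matches the division of labour between algorithms and human experts that the study reports.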