Getting one voice: tuning up experts' assessment in measuring accessibility

S. Mirri, P. Salomoni, L. Muratori, Matteo Battistelli
DOI: 10.1145/2207016.2207023
Venue: International Cross-Disciplinary Conference on Web Accessibility
Published: 2012-04-16
Citations: 5

Abstract

Web accessibility evaluations are typically carried out by means of automatic tools and human assessments. Accessibility metrics quantify accessibility levels or accessibility barriers, providing a numerical synthesis of such evaluations. It is worth noting that, while automatic tools usually return binary values (the presence or absence of an error), human assessments in manual evaluations are subjective and can take values from a continuous range. In this paper we present a model that takes multiple manual evaluations into account and produces final single values. In particular, an extension of our previous metric BIF, called cBIF, has been designed and implemented to evaluate the consistency and effectiveness of such a model. Suitable tools and the collaboration of a group of evaluators are supporting us in producing first results on our metric and are yielding interesting clues for future research.
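The abstract contrasts binary tool outputs with continuous, subjective expert scores and describes reducing several manual evaluations to one final value. The cBIF formula itself is not given here, so the sketch below is only an illustration of that general idea, not the paper's metric: all names and the aggregation rule (fail on a tool-detected error, otherwise average the experts) are assumptions.

```python
from statistics import mean

def aggregate_checkpoint(tool_error: bool, expert_scores: list[float]) -> float:
    """Combine one automatic binary check with several subjective expert
    scores (each assumed in [0.0, 1.0], where 1.0 means fully accessible).

    Hypothetical rule, not the paper's cBIF: a tool-detected error fails
    the checkpoint outright; otherwise the experts' continuous opinions
    are averaged into a single value.
    """
    if tool_error:
        return 0.0
    return mean(expert_scores)

# Three experts disagree on the severity of the same potential barrier;
# the model's job is to return one voice from their assessments.
score = aggregate_checkpoint(tool_error=False, expert_scores=[0.6, 0.8, 0.7])
```

Averaging is only the simplest possible reducer; a weighting by evaluator expertise or agreement would be equally compatible with the description in the abstract.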