Getting one voice: tuning up experts' assessment in measuring accessibility
S. Mirri, P. Salomoni, L. Muratori, Matteo Battistelli
International Cross-Disciplinary Conference on Web Accessibility, 2012-04-16. DOI: 10.1145/2207016.2207023
Citations: 5
Abstract
Web accessibility evaluations are typically carried out by means of automatic tools and by human assessment. Accessibility metrics quantify the accessibility level or the accessibility barriers of a page, providing a numerical synthesis of such evaluations. It is worth noting that, while automatic tools usually return binary values (meaning the presence or absence of an error), human assessments in manual evaluations are subjective and can take values from a continuous range.
In this paper we present a model which takes multiple manual evaluations into account and provides a single final value. In particular, an extension of our previous metric BIF, called cBIF, has been designed and implemented to evaluate the consistency and effectiveness of such a model. Suitable tools and the collaboration of a group of evaluators are supporting us in producing first results on our metric and are yielding interesting clues for future research.
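The abstract contrasts binary automatic results with continuous human ratings and describes a model that reduces multiple manual evaluations to one final value. The cBIF formula itself is not reproduced on this page, so the following Python sketch only illustrates the general idea under assumed choices: the function name, the plain averaging of evaluator ratings, and the equal weighting of the automatic result are all hypothetical, not taken from the paper.

```python
# A minimal sketch (not the paper's actual cBIF metric): fusing a binary
# automatic check with multiple continuous human ratings into one score.

from statistics import mean

def combine_evaluations(automatic_error: bool, human_ratings: list[float]) -> float:
    """Hypothetical aggregation: the automatic tool contributes a binary
    presence/absence value, each human rating lies in [0, 1], and the
    ratings are averaged to yield a single consensus severity value."""
    if not human_ratings:
        # No manual data available: fall back to the binary automatic result.
        return 1.0 if automatic_error else 0.0
    consensus = mean(human_ratings)  # one value from many assessments
    # Blend the consensus with the automatic result using equal weights
    # (an illustrative choice, not a detail given in the abstract).
    return (consensus + (1.0 if automatic_error else 0.0)) / 2

# Example: a tool flags a barrier; three evaluators rate its severity.
print(combine_evaluations(True, [0.8, 0.6, 0.9]))  # -> 0.883...
```

In this toy version, disagreement among evaluators is simply averaged away; the model described in the paper is precisely about handling such multiple, subjective assessments more carefully before producing the final single value.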