Towards a Refined Heuristic Evaluation: Incorporating Hierarchical Analysis for Weighted Usability Assessment
L. Talero-Sarmiento, Marc Gonzalez-Capdevila, Antoni Granollers, Henry Lamos-Diaz, Karine Pistili-Rodrigues
Big Data and Cognitive Computing, published 2024-06-13. DOI: 10.3390/bdcc8060069 (https://doi.org/10.3390/bdcc8060069)
Abstract
This study explores the implementation of the analytic hierarchy process in usability evaluations, specifically focusing on user interface assessment during software development phases. Addressing the challenge of diverse and unstandardized evaluation methodologies, our research develops and applies a tailored algorithm that simplifies heuristic prioritization. This novel method combines the analytic hierarchy process framework with a bespoke algorithm that leverages transitive properties for efficient pairwise comparisons, significantly reducing the evaluative workload. The algorithm is designed to facilitate the estimation of heuristic relevance regardless of the number of items per heuristic or the item scale, thereby streamlining the evaluation process. Rigorous simulation testing of this tailored algorithm is complemented by its empirical application, where seven usability experts evaluate a web interface. This practical implementation demonstrates our method’s ability to decrease the number of necessary comparisons and to reduce the complexity and workload associated with the traditional prioritization process. Additionally, it improves the accuracy and relevance of the user interface usability heuristic testing results. By prioritizing heuristics based on their importance as determined by the Usability Testing Leader—rather than merely depending on the number of items, scale, or heuristics—our approach ensures that evaluations focus on the most critical usability aspects from the start. The findings from this study highlight the importance of expert-driven evaluations for gaining a thorough understanding of heuristic UI assessment, offering a wider perspective than user-perception-based methods like the questionnaire approach. Our research contributes to advancing UI evaluation methodologies, offering an organized and effective framework for future usability testing endeavors.
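The abstract does not spell out the bespoke algorithm, but the general idea it describes (using transitivity to avoid eliciting every pairwise comparison, then deriving AHP priority weights) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the heuristic chain values, function names, and the choice of the eigenvector method are assumptions made for demonstration.

```python
# Illustrative sketch (assumed, not the paper's algorithm): rebuild a full AHP
# pairwise-comparison matrix from only n-1 chained judgments via transitivity
# (a_ij = a_ik * a_kj), then derive heuristic weights from the principal
# eigenvector, as in standard AHP.
import numpy as np


def matrix_from_chain(chain_ratios):
    """Build an n x n reciprocal comparison matrix from n-1 adjacent ratios.

    chain_ratios[k] is the judged importance of heuristic k relative to
    heuristic k+1. All remaining entries follow from transitivity, so the
    evaluator supplies n-1 comparisons instead of n*(n-1)/2.
    """
    n = len(chain_ratios) + 1
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Multiply the adjacent ratios along the chain from i to j.
            A[i, j] = np.prod(chain_ratios[i:j])
            A[j, i] = 1.0 / A[i, j]
    return A


def ahp_weights(A):
    """Priority vector = normalized principal eigenvector of the matrix."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()


if __name__ == "__main__":
    # Hypothetical example: four usability heuristics, three chained judgments
    # (H1 vs H2, H2 vs H3, H3 vs H4).
    chain = [2.0, 1.5, 3.0]
    A = matrix_from_chain(chain)
    print("weights:", np.round(ahp_weights(A), 3))
```

Because the matrix is completed by transitivity, it is perfectly consistent by construction (consistency ratio of zero), which is one plausible reason such a scheme reduces both the number of judgments and the downstream consistency-checking burden; the paper's actual treatment of consistency may differ.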