{"title":"Is usability testing valid with prototypes where clickable hotspots are highlighted upon misclick?","authors":"Matus Krajcovic , Peter Demcak , Eduard Kuric","doi":"10.1016/j.jss.2025.112446","DOIUrl":null,"url":null,"abstract":"<div><div>In user experience design, prototypes are an indispensable tool for early diagnosis of usability issues. Designing usability testing to accommodate a prototype’s limited interactivity is essential to obtain relevant participant feedback. Hotspot Highlighting is a technique employed by all prominent prototyping tools to allow usability testers to see which areas of the prototype are clickable. The current body of knowledge lacks definite answers on how highlighting impacts usability testing results, compared to scenarios where participants complete tasks fully on their own. Can studies be treated the same, regardless of whether Hotspot Highlighting is enabled? What are the recommendations for how and when Hotspot Highlighting can or should be used? To investigate, we conduct a between-subjects experiment with 80 participants and 240 task completions in which we compare user behavior depending on the presence of Hotspot Highlighting. Its results indicate that Hotspot Highlighting can affect participant behavior before and after a highlight is displayed, leading to potentially different usability findings if left unaccounted for. The guidance of highlights changes the targets of clicks and encourages cognitively efficient finding of solutions by intentionally triggering the highlights. Considering the potential of Hotspot Highlighting to facilitate the usability testing of prototypes with limited interactivity, we discuss potential adaptations of the technique that address its current issues for more methodologically sound usability evaluation.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"226 ","pages":"Article 112446"},"PeriodicalIF":3.7000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems and Software","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0164121225001141","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
In user experience design, prototypes are an indispensable tool for early diagnosis of usability issues. Designing usability testing to accommodate a prototype’s limited interactivity is essential to obtain relevant participant feedback. Hotspot Highlighting is a technique employed by all prominent prototyping tools to allow usability testers to see which areas of the prototype are clickable. The current body of knowledge lacks definite answers on how highlighting impacts usability testing results, compared to scenarios where participants complete tasks fully on their own. Can studies be treated the same, regardless of whether Hotspot Highlighting is enabled? What are the recommendations for how and when Hotspot Highlighting can or should be used? To investigate, we conduct a between-subjects experiment with 80 participants and 240 task completions in which we compare user behavior depending on the presence of Hotspot Highlighting. Its results indicate that Hotspot Highlighting can affect participant behavior before and after a highlight is displayed, leading to potentially different usability findings if left unaccounted for. The guidance of highlights changes the targets of clicks and encourages cognitively efficient finding of solutions by intentionally triggering the highlights. Considering the potential of Hotspot Highlighting to facilitate the usability testing of prototypes with limited interactivity, we discuss potential adaptations of the technique that address its current issues for more methodologically sound usability evaluation.
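For readers unfamiliar with the mechanic the abstract refers to, the sketch below illustrates the general behavior of Hotspot Highlighting: a click outside any clickable region briefly highlights all hotspots on the current prototype screen. This is a hypothetical TypeScript illustration, not the implementation used in the study or in any particular prototyping tool; the names Hotspot, flashHotspots, handlePrototypeClick, navigateTo, and HIGHLIGHT_MS are all assumed.

```typescript
// Minimal sketch of the Hotspot Highlighting idea described in the abstract.
// All names are hypothetical; this is not taken from the paper or from any
// specific prototyping tool.

interface Hotspot {
  element: HTMLElement;   // clickable region of the prototype screen
  targetScreen: string;   // screen the hotspot navigates to
}

const HIGHLIGHT_MS = 600; // assumed duration of the highlight flash

function flashHotspots(hotspots: Hotspot[]): void {
  // Briefly outline every clickable region so the participant can see them.
  for (const { element } of hotspots) {
    element.classList.add("hotspot-highlight");
  }
  setTimeout(() => {
    for (const { element } of hotspots) {
      element.classList.remove("hotspot-highlight");
    }
  }, HIGHLIGHT_MS);
}

function handlePrototypeClick(event: MouseEvent, hotspots: Hotspot[]): void {
  const hit = hotspots.find(h => h.element.contains(event.target as Node));
  if (hit) {
    // Valid click: follow the prototype link as usual.
    navigateTo(hit.targetScreen);
  } else {
    // Misclick on a non-interactive area: reveal where the hotspots are.
    flashHotspots(hotspots);
  }
}

// Hypothetical navigation stub for the sketch.
function navigateTo(screen: string): void {
  console.log(`Prototype navigates to screen: ${screen}`);
}
```

The point the paper examines is the behavioral side effect of this mechanic: once participants learn that a misclick reveals the hotspots, they can trigger it deliberately, which changes where they click and how they find solutions.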
Journal introduction:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.