Youyu Sheng, Yaoqin Gu, Jianqin Cao, Yuhan Liu, Xiaoyu Wang, Jiani Chen, Xianghong Sun, Jingyu Zhang

International Journal of Human-Computer Studies, Vol. 203, Article 103592. Published 2025-07-31. DOI: 10.1016/j.ijhcs.2025.103592
Measuring understandability of intelligent systems: Scale development and validation across three domains
While modern intelligent systems built on black-box algorithms have proved useful in many areas, whether human users can fully understand these systems’ decisions and intentions remains a critical question. However, reliable measures of system understandability are lacking, which hinders progress in this direction. To fill this gap, we conducted three studies to construct and validate a scale measuring the understandability of intelligent systems across three domains. In Study 1, we developed the original scale items through document analysis and expert interviews. In Study 2, we exposed 307 participants to autonomous vehicle systems that provided different amounts of information in simulated takeover scenarios. Participants’ responses to these systems were collected using the developed scale. Exploratory factor analysis identified four factors (Explanation Comprehensiveness, Trustworthiness Calibration, Cognitive Accessibility, and Explanation Necessity), and these factors correlated significantly with important attitudinal and behavioral outcomes, including trust, usage intention, and satisfaction. In Study 3, we further validated the structural and criterion-related validity of the scale using a new sample of 347 participants interacting with medical and financial decision support systems. The results indicate that the developed scale is a reliable and effective tool for assessing understandability across different intelligent systems, with the potential to inform the design of intelligent systems that are more user-friendly and comprehensible.
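To illustrate the kind of analysis the abstract describes, the sketch below runs an exploratory factor analysis on synthetic Likert-style responses using scikit-learn. This is not the authors' actual code or data: the sample size, item count, and generated responses are invented for the example; only the four-factor structure mirrors the paper's reported result.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic Likert-style data: 300 participants rating 12 scale items.
# Responses are generated from four hypothetical latent factors plus noise,
# standing in for the real questionnaire data collected in the studies.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 4))                  # four latent factors
loadings = rng.normal(scale=0.8, size=(4, 12))      # item-factor loadings
responses = latent @ loadings + rng.normal(scale=0.5, size=(300, 12))

# Fit a four-factor model with varimax rotation, mirroring the
# four-factor structure the abstract reports.
fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(responses)

print(scores.shape)          # one score per factor per participant
print(fa.components_.shape)  # loading of each item on each factor
```

In practice, researchers typically inspect `fa.components_` to see which items load on which factor and then name the factors accordingly (as the authors did for Explanation Comprehensiveness, Trustworthiness Calibration, Cognitive Accessibility, and Explanation Necessity).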
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer-mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...