Machine learning systems as mentors in human learning: A user study on machine bias transmission in medical training

Authors: Lucia Vicente, Helena Matute, Caterina Fregosi, Federico Cabitza
Journal: International Journal of Human-Computer Studies, Volume 198, Article 103474
DOI: 10.1016/j.ijhcs.2025.103474
Published: 2025-02-20 (Journal Article)
Impact Factor: 5.3; JCR: Q1 (Computer Science, Cybernetics)
URL: https://www.sciencedirect.com/science/article/pii/S107158192500031X
Citations: 0
Abstract
While accurate AI systems can enhance human performance, exerting both an augmentation effect and a good mentoring effect, imperfect systems may act as poor mentors, transmitting biases and systematic errors to users. However, there is still limited research on the potential for AI to transmit biases to humans, an effect that could be even more pronounced for less experienced users, such as novices or trainees making decisions supported by AI-based systems. To investigate the bias transmission effect and the potential of AI to serve as a mentor, we involved eighty-six medical students, dividing them into an AI-assisted group and a control group. We tasked them with classifying simulated tissue samples for a fictitious disease. In the first phase of the task, the AI group received diagnostic advice from a simulated AI system that made systematic errors for a specific type of case while being accurate for all other types. The control group did not receive any assistance. In the second phase, participants in both groups classified new tissue samples, including ambiguous cases, without any support, to test the residual impact of the AI's bias. The results showed that the AI-assisted group exhibited a higher error rate when classifying cases for which the AI had provided systematically erroneous advice, both in the AI-assisted phase and in the subsequent unassisted phase, suggesting the persistence of AI-induced bias. Our study emphasizes the need for careful implementation and continuous evaluation of AI systems in education and training to mitigate potential negative impacts on trainee learning outcomes.
Journal overview
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...