Translating the machine; An assessment of clinician understanding of ophthalmological artificial intelligence outputs
Oskar Wysocki, Sammie Mak, Hannah Frost, Donna M. Graham, Dónal Landers, Tariq Aslam
International Journal of Medical Informatics, Volume 201, Article 105958 (published 2025-05-06). DOI: 10.1016/j.ijmedinf.2025.105958
Abstract
Introduction
Advances in artificial intelligence offer the promise of automated analysis of optical coherence tomography (OCT) scans to detect ocular complications from anticancer drug therapy. To explore how such AI outputs are interpreted in clinical settings, we conducted a survey-based interview study with 27 clinicians, comprising 10 ophthalmic specialists, 10 ophthalmic practitioners, and 7 oncologists. Participants were first introduced to core AI concepts and realistic clinical scenarios, then asked to assess AI-generated OCT analyses using standardized Likert-scale questions, allowing us to gauge their understanding, trust, and readiness to integrate AI into practice.
Methods
We developed a questionnaire through literature review and consultations with ophthalmologists, computer scientists, and AI researchers. A single investigator interviewed 27 clinicians across three specialties and transcribed their responses. Data were summarized as medians (ranges) and compared with Mann–Whitney U tests (α = 0.05).
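As a hedged illustration of the reported analysis, the sketch below summarises Likert ratings from two hypothetical clinician groups as medians (ranges) and compares them with a two-sided Mann–Whitney U test at α = 0.05. The group labels and scores are invented for illustration and are not the study's data.

```python
# Illustrative only: summarise ordinal Likert ratings as median (range) and
# compare two groups with a Mann-Whitney U test, as described in the Methods.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 5-point trust ratings (1 = no trust, 5 = full trust)
experienced = np.array([4, 5, 4, 3, 4, 5, 4, 4, 3, 4])        # e.g. ophthalmic specialists
less_experienced = np.array([3, 4, 5, 5, 4, 3, 5, 4, 5, 5])   # e.g. ophthalmic practitioners

for name, scores in [("experienced", experienced), ("less experienced", less_experienced)]:
    print(f"{name}: median {np.median(scores):.1f}, range {scores.min()}-{scores.max()}")

# Two-sided test; reject the null hypothesis of equal distributions if p < 0.05
stat, p_value = mannwhitneyu(experienced, less_experienced, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}, significant at alpha = 0.05: {p_value < 0.05}")
```

The Mann–Whitney U test is a natural choice for such data because Likert responses are ordinal, so the comparison relies on ranks rather than means.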
Results
We noted important differences in the impact of various explainability methods on trust, depending on the nature of the clinical or AI scenario and on staff expertise. Explanations of AI outputs increased trust in the AI algorithm when the outputs simply reflected ground-truth expert opinion. When clinical scenarios were complex and the AI outputs were incorrect, responses to explainability were mixed: trust was appropriately reduced among experienced clinicians, whereas feedback from less experienced clinicians was inconsistent. There was general consensus among all clinicians that they currently lack the knowledge needed to interact with AI and would like more training.
Conclusions
Clinicians’ trust in AI algorithms is affected by explainability methods and by factors including the AI’s performance, personal judgement and clinical experience. The development of clinical AI systems should take these factors into account, and such responses should ideally be factored into real-world assessments. The findings of this study could help improve the real-world validity of medical AI systems by enhancing human–computer interaction, with preferred explainability techniques tailored to specific situations.
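The abstract does not name the specific explainability methods evaluated. As one hedged illustration of a widely used approach, the sketch below computes a gradient-based saliency map over an OCT scan for a placeholder classifier; the model, input and shapes are assumptions for illustration and are not the study's system.

```python
# Illustrative gradient-based saliency map: highlights which pixels of an OCT
# scan most influence a classifier's prediction, one common explainability method.
import torch
import torch.nn as nn

# Toy stand-in for an OCT image classifier; a real system would use a trained network.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Placeholder OCT B-scan (1 channel, 224 x 224); gradients are tracked w.r.t. the pixels.
oct_scan = torch.rand(1, 1, 224, 224, requires_grad=True)

logits = model(oct_scan)
predicted_class = int(logits.argmax(dim=1))

# Back-propagate the predicted-class score to the input pixels.
logits[0, predicted_class].backward()

# Per-pixel importance map that could be overlaid on the scan as a heat map for clinicians.
saliency = oct_scan.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```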
About the journal:
International Journal of Medical Informatics provides an international medium for dissemination of original results and interpretative reviews concerning the field of medical informatics. The Journal emphasizes the evaluation of systems in healthcare settings.
The scope of the journal covers:
Information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.;
Computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods, as exemplified in decision theory, protocol development, artificial intelligence, etc.;
Educational computer-based programs pertaining to medical informatics or medicine in general;
Organizational, economic, social, clinical impact, ethical and cost-benefit aspects of IT applications in health care.