Clinician's Artificial Intelligence Checklist and Evaluation Questionnaire: Tools for Oncologists to Assess Artificial Intelligence and Machine Learning Models.
Nadia S Siddiqui, Yazan Bouchi, Syed Jawad Hussain Shah, Saeed Alqarni, Suraj Sood, Yugyung Lee, John Park, John Kang
JCO Clinical Cancer Informatics, Volume 9, e2500067. Published online September 17, 2025 (Epub). DOI: 10.1200/CCI-25-00067. Citations: 0.
Abstract
Advancements in oncology are accelerating in the fields of artificial intelligence (AI) and machine learning. The complexity and multidisciplinary nature of oncology necessitate a cautious approach to evaluating AI models. The surge in development of AI tools highlights a need for organized evaluation methods. Currently, widely accepted guidelines are aimed at developers and do not provide necessary technical background for clinicians. Additionally, published guides introducing clinicians to AI in medicine often lack user-friendly evaluation tools or lack specificity to oncology. This paper provides background on model development and proposes a yes/no checklist and questionnaire designed to help oncologists effectively assess AI models. The yes/no checklist is intended to be used as a more efficient scan of whether the model conforms to published best standards. The open-ended questionnaire is intended for a more in-depth survey. The checklist and the questionnaire were developed by clinical and AI researchers. Initial discussions identified broad domains, gradually narrowing to model development points relevant to clinical practice. The development process included two literature searches to align with current best practices. Insights from 24 articles were integrated to refine the questionnaire and the checklist. The developed tools are intended for use by clinicians in the field of oncology looking to evaluate AI models. Cases of four AI applications in oncology are analyzed, demonstrating utility in real-world scenarios and enhancing case-based learning for clinicians. These tools highlight the interdisciplinary nature of effective AI integration in oncology.