Diagnostic Prediction Models for Primary Care, Based on AI and Electronic Health Records: Systematic Review
Liesbeth Hunik, Asma Chaabouni, Twan van Laarhoven, Tim C Olde Hartman, Ralph T H Leijenaar, Jochen W L Cals, Annemarie A Uijen, Henk J Schers
JMIR Medical Informatics, volume 13, e62862. Published 2025-08-22. DOI: 10.2196/62862
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373303/pdf/
Citations: 0
Abstract
Background: Artificial intelligence (AI)-based diagnostic prediction models could support primary care (PC) decision-making, enabling faster and more accurate diagnoses. AI has the potential to transform electronic health record (EHR) data into valuable diagnostic prediction models, and various EHR-based prediction models have been developed. However, no systematic review has yet evaluated AI-based diagnostic prediction models for PC that use EHR data.
Objective: This study aims to evaluate the content of diagnostic prediction models based on AI and EHRs in PC, including risk of bias and applicability.
Methods: This systematic review was performed according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. MEDLINE, Embase, Web of Science, and Cochrane were searched. We included observational and intervention studies using AI and PC EHRs and developing or testing a diagnostic prediction model for health conditions. Two independent reviewers (LH and AC) used a standardized data extraction form. Risk of bias and applicability were assessed using PROBAST (Prediction Model Risk of Bias Assessment Tool).
Results: From 10,657 retrieved records, a total of 15 papers were selected. Most papers focused on 1 chronic health care condition (n=11, 73%). Of the 15 papers, 13 (87%) described the development of a diagnostic prediction model and 2 (13%) described external validation and testing of a model in a PC setting. The studies used a variety of AI techniques, and all predictors used to develop the models were routinely registered in the EHR. We found no papers with a low risk of bias; 9 (60%) papers had a high risk of bias. Sources of bias included an unjustifiably small sample size, failure to exclude predictors that were part of the outcome definition, and inappropriate evaluation of performance measures. In 6 papers, the risk of bias was unclear because no information was provided on the handling of missing data and no results of the multivariate analysis were reported. Applicability was unclear in 10 (67%) papers, mainly due to unclear reporting of the time interval between predictors and outcomes.
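To make these bias sources concrete, the minimal sketch below shows how a development study could avoid them: excluding predictors that define the outcome, handling missing data inside the modelling pipeline, and evaluating discrimination on held-out patients. The file name, feature names, outcome label, and modelling choices are illustrative assumptions, not taken from the reviewed studies.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical EHR extract: one row per patient, routinely registered predictors.
df = pd.read_csv("ehr_extract.csv")

# 1. Avoid outcome leakage: drop candidate predictors that are part of the
#    outcome definition itself (here, a hypothetical diagnostic code).
candidate_predictors = ["age", "sex", "bmi", "hba1c", "n_gp_contacts_12m", "icpc_code_T90"]
outcome_defining = {"icpc_code_T90"}
predictors = [p for p in candidate_predictors if p not in outcome_defining]

X = df[predictors]
y = df["diabetes_within_1y"]  # hypothetical binary outcome label

# 2. Handle missing data explicitly, inside the pipeline, so the imputer is
#    fitted on the training data only.
numeric = ["age", "bmi", "hba1c", "n_gp_contacts_12m"]
categorical = ["sex"]
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("prep", preprocess),
                  ("clf", LogisticRegression(max_iter=1000))])

# 3. Evaluate performance on patients the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```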
Conclusions: Most AI-based diagnostic prediction models developed from PC EHR data focused on 1 chronic condition, and only 2 papers tested a model in a PC setting. Insufficiently described methods led to a high risk of bias. Our findings highlight that the currently available diagnostic prediction models are not yet ready for clinical implementation in PC.
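For illustration only, the sketch below shows what the external validation step that only 2 of the included papers performed could look like: a previously developed model is applied unchanged to EHR data from a different PC population, and its discrimination and calibration are re-estimated. All file and column names are hypothetical.

```python
import joblib
import pandas as pd
from sklearn.metrics import brier_score_loss, roc_auc_score

# Previously developed model pipeline and EHR data from a different practice
# (both file names are hypothetical).
model = joblib.load("prediction_model.joblib")
external = pd.read_csv("other_practice_ehr.csv")

X_ext = external[["age", "sex", "bmi", "hba1c", "n_gp_contacts_12m"]]
y_ext = external["diabetes_within_1y"]

# The model is not refitted: external validation measures how well the original
# model transports to a new primary care population.
p_ext = model.predict_proba(X_ext)[:, 1]

print(f"External AUC (discrimination): {roc_auc_score(y_ext, p_ext):.2f}")
print(f"Brier score (overall calibration/accuracy): {brier_score_loss(y_ext, p_ext):.3f}")
```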
About the journal:
JMIR Medical Informatics (JMI, ISSN 2291-9694) is a top-rated, tier A journal which focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, ehealth infrastructures and implementation. It has a focus on applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry and health informatics professionals.
Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2016: 5.175), JMIR Med Inform has a slightly different scope: it places more emphasis on applications for clinicians and health professionals rather than consumers/citizens (the focus of JMIR), publishes even faster, and also accepts papers that are more technical or more formative than those published in the Journal of Medical Internet Research.