Machine Learning Models for Predicting Treatment-Requiring Retinopathy of Prematurity in the e-ROP Study

IF 2.6 · CAS Tier 3 (Medicine) · JCR Q2 (Ophthalmology)
Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He
{"title":"预测e-ROP研究中需要治疗的早产儿视网膜病变的机器学习模型。","authors":"Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He","doi":"10.1167/tvst.14.8.14","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.</p><p><strong>Methods: </strong>This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.</p><p><strong>Results: </strong>Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.</p><p><strong>Conclusions: </strong>ML models using image findings, demographics and clinical characteristics moderately predict TR-ROP, with DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.</p><p><strong>Translational relevance: </strong>ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.</p>","PeriodicalId":23322,"journal":{"name":"Translational Vision Science & Technology","volume":"14 8","pages":"14"},"PeriodicalIF":2.6000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352519/pdf/","citationCount":"0","resultStr":"{\"title\":\"Machine Learning Models for Predicting Treatment-Requiring Retinopathy of Prematurity in the e-ROP Study.\",\"authors\":\"Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He\",\"doi\":\"10.1167/tvst.14.8.14\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.</p><p><strong>Methods: </strong>This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. 
Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.</p><p><strong>Results: </strong>Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.</p><p><strong>Conclusions: </strong>ML models using image findings, demographics and clinical characteristics moderately predict TR-ROP, with DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.</p><p><strong>Translational relevance: </strong>ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.</p>\",\"PeriodicalId\":23322,\"journal\":{\"name\":\"Translational Vision Science & Technology\",\"volume\":\"14 8\",\"pages\":\"14\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352519/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Translational Vision Science & Technology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/tvst.14.8.14\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Translational Vision Science & Technology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/tvst.14.8.14","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.

Methods: This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.
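
The study's modeling pipeline is not included in this abstract. As a rough illustration of the kind of multi-model comparison described above, the sketch below trains several scikit-learn classifiers on synthetic data standing in for the e-ROP image findings and demographic/clinical variables, then reports AUC, sensitivity, and specificity for each. The XGBoost model is replaced by scikit-learn's gradient boosting, the DNN by a small MLP, and the transformer is omitted; the feature count, assumed ~15% TR-ROP prevalence, and all hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the study's code): compare several classifiers on
# synthetic data and report AUC, sensitivity, and specificity for each.
# The feature matrix, prevalence, and hyperparameters below are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 771 infants with an assumed ~15% TR-ROP prevalence.
X, y = make_classification(n_samples=771, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=10),
    "Support vector machine": SVC(probability=True, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Gradient boosting (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
    "DNN (small MLP)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                     max_iter=1000, random_state=0),
}

for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)
    clf.fit(X_train, y_train)
    prob = clf.predict_proba(X_test)[:, 1]      # predicted TR-ROP risk
    pred = (prob >= 0.5).astype(int)            # default 0.5 operating point
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"{name:38s} AUC={roc_auc_score(y_test, prob):.3f}  "
          f"sens={tp / (tp + fn):.3f}  spec={tn / (tn + fp):.3f}")
```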

Results: Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.
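
Each reported sensitivity/specificity pair corresponds to a single operating point on a model's ROC curve, which is how the DNN can show both the highest sensitivity and the lowest specificity. The abstract does not state how thresholds were selected; the hypothetical sketch below uses Youden's J on simulated risk scores purely to show the mechanics of that trade-off.

```python
# Hypothetical illustration only: one sensitivity/specificity pair is one
# point on the ROC curve, chosen here with Youden's J (sensitivity +
# specificity - 1). The study's threshold-selection rule is not stated.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Simulated predicted risks: TR-ROP cases (label 1) score higher on average.
y_true = np.r_[np.zeros(200, dtype=int), np.ones(35, dtype=int)]
scores = np.r_[rng.normal(0.30, 0.15, 200), rng.normal(0.60, 0.15, 35)]

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)                     # maximize Youden's J
print(f"threshold={thresholds[best]:.2f}  "
      f"sensitivity={tpr[best]:.3f}  specificity={1 - fpr[best]:.3f}")
# Lowering the threshold raises sensitivity but lowers specificity,
# the same trade-off seen across the models reported above.
```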

Conclusions: ML models using image findings, demographics, and clinical characteristics moderately predict TR-ROP, with the DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.

Translational relevance: ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.

Source journal
Translational Vision Science & Technology
Subject category: Engineering - Biomedical Engineering
CiteScore: 5.70
Self-citation rate: 3.30%
Annual publications: 346
Review time: 25 weeks
Journal description: Translational Vision Science & Technology (TVST), an official journal of the Association for Research in Vision and Ophthalmology (ARVO), an international organization whose purpose is to advance research worldwide into understanding the visual system and preventing, treating and curing its disorders, is an online, open access, peer-reviewed journal emphasizing multidisciplinary research that bridges the gap between basic research and clinical care. A highly qualified and diverse group of Associate Editors and Editorial Board Members is led by Editor-in-Chief Marco Zarbin, MD, PhD, FARVO. The journal covers a broad spectrum of work, including but not limited to:
- Applications of stem cell technology for regenerative medicine
- Development of new animal models of human diseases
- Tissue bioengineering
- Chemical engineering to improve virus-based gene delivery
- Nanotechnology for drug delivery
- Design and synthesis of artificial extracellular matrices
- Development of a true microsurgical operating environment
- Refining data analysis algorithms to improve in vivo imaging technology
- Results of Phase 1 clinical trials
- Reverse translational ("bedside to bench") research
TVST seeks manuscripts from scientists and clinicians with diverse backgrounds ranging from basic chemistry to ophthalmic surgery that will advance or change the way we understand and/or treat vision-threatening diseases. TVST encourages the use of color, multimedia, hyperlinks, program code and other digital enhancements.