Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He
{"title":"预测e-ROP研究中需要治疗的早产儿视网膜病变的机器学习模型。","authors":"Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He","doi":"10.1167/tvst.14.8.14","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.</p><p><strong>Methods: </strong>This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.</p><p><strong>Results: </strong>Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.</p><p><strong>Conclusions: </strong>ML models using image findings, demographics and clinical characteristics moderately predict TR-ROP, with DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.</p><p><strong>Translational relevance: </strong>ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.</p>","PeriodicalId":23322,"journal":{"name":"Translational Vision Science & Technology","volume":"14 8","pages":"14"},"PeriodicalIF":2.6000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352519/pdf/","citationCount":"0","resultStr":"{\"title\":\"Machine Learning Models for Predicting Treatment-Requiring Retinopathy of Prematurity in the e-ROP Study.\",\"authors\":\"Dinglun He, Xinwei Luo, Bowen Ying, Graham E Quinn, Agnieshka Baumritter, Yong Chen, Gui-Shuang Ying, Lifang He\",\"doi\":\"10.1167/tvst.14.8.14\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.</p><p><strong>Methods: </strong>This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. 
Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.</p><p><strong>Results: </strong>Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.</p><p><strong>Conclusions: </strong>ML models using image findings, demographics and clinical characteristics moderately predict TR-ROP, with DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.</p><p><strong>Translational relevance: </strong>ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.</p>\",\"PeriodicalId\":23322,\"journal\":{\"name\":\"Translational Vision Science & Technology\",\"volume\":\"14 8\",\"pages\":\"14\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352519/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Translational Vision Science & Technology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/tvst.14.8.14\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Translational Vision Science & Technology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/tvst.14.8.14","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Machine Learning Models for Predicting Treatment-Requiring Retinopathy of Prematurity in the e-ROP Study.
Purpose: To evaluate machine learning (ML) models for predicting treatment-requiring retinopathy of prematurity (TR-ROP) using image findings at 32 to 34 weeks of postmenstrual age, along with demographic and clinical characteristics.
Methods: This secondary analysis included 771 infants with a birth weight of less than 1251 g who had at least one imaging session by 34 weeks postmenstrual age and at least one subsequent ROP examination for determining TR-ROP by ophthalmologists in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study. Six ML models (K-nearest neighbors, support vector machine, random forest, extreme gradient boosting, deep neural network [DNN], and transformer) were evaluated for predicting TR-ROP. Prediction performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.
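As an illustration of this kind of model comparison (a minimal sketch, not the study's actual pipeline), the code below cross-validates several of the named classifier families with scikit-learn and reports AUC. The file names, hyperparameters, and the use of gradient boosting and a small multilayer perceptron as stand-ins for XGBoost and the DNN are assumptions for demonstration only.

```python
# Minimal sketch (not the e-ROP study code): comparing candidate classifiers
# for a binary TR-ROP label with scikit-learn. The feature matrix X and label
# vector y are assumed to already encode the image findings plus
# demographic/clinical variables; the file names below are hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

X = np.load("features.npy")  # hypothetical: image findings + demographic/clinical data
y = np.load("labels.npy")    # hypothetical: 1 = TR-ROP, 0 = no TR-ROP

models = {
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "Support vector machine": SVC(probability=True),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    # Gradient boosting here stands in for extreme gradient boosting (XGBoost).
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
    # A small multilayer perceptron stands in for the paper's DNN.
    "MLP (DNN stand-in)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                        max_iter=1000, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale features, then fit
    auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```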
Results: Using image findings and demographic and clinical data, ML models achieved AUCs ranging from 0.777 (K-nearest neighbors) to 0.853 (DNN), sensitivity ranging from 0.765 (extreme gradient boosting) to 0.929 (DNN), and specificity ranging from 0.644 (DNN) to 0.698 (transformer). Using image findings alone, the DNN performed best with an AUC of 0.787, sensitivity of 0.729, and specificity of 0.725.
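The reported sensitivity and specificity depend on the probability cutoff applied to each model's output; the sketch below (illustrative only, with toy data rather than study data) shows how these metrics are derived from predicted probabilities and why raising sensitivity tends to lower specificity.

```python
# Minimal sketch (illustrative only): sensitivity and specificity at a chosen
# probability cutoff, given predicted TR-ROP probabilities from a held-out split.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    """Return (sensitivity, specificity) at the given probability threshold."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)  # true-positive rate among TR-ROP cases
    specificity = tn / (tn + fp)  # true-negative rate among non-TR-ROP cases
    return sensitivity, specificity

# Toy values (not study data):
y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.8, 0.6, 0.3, 0.9])
print("AUC:", roc_auc_score(y_true, y_prob))
print("Sens/Spec at 0.5:", sensitivity_specificity(y_true, y_prob, 0.5))
# Lowering the threshold raises sensitivity (fewer missed TR-ROP cases) at the
# cost of specificity, the same trade-off seen in the DNN's high sensitivity
# and comparatively low specificity.
```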
Conclusions: ML models using image findings, demographics and clinical characteristics moderately predict TR-ROP, with DNN achieving the highest AUC and sensitivity. Although ML models may provide tools for the early identification of high-risk infants for close monitoring and timely treatment of TR-ROP, future research is needed to improve their performance.
Translational relevance: ML has the potential to predict TR-ROP risk based on early image findings, demographics, and clinical characteristics.
Journal Introduction:
Translational Vision Science & Technology (TVST), an official journal of the Association for Research in Vision and Ophthalmology (ARVO), an international organization whose purpose is to advance research worldwide into understanding the visual system and preventing, treating and curing its disorders, is an online, open access, peer-reviewed journal emphasizing multidisciplinary research that bridges the gap between basic research and clinical care. A highly qualified and diverse group of Associate Editors and Editorial Board Members is led by Editor-in-Chief Marco Zarbin, MD, PhD, FARVO.
The journal covers a broad spectrum of work, including but not limited to:
Applications of stem cell technology for regenerative medicine,
Development of new animal models of human diseases,
Tissue bioengineering,
Chemical engineering to improve virus-based gene delivery,
Nanotechnology for drug delivery,
Design and synthesis of artificial extracellular matrices,
Development of a true microsurgical operating environment,
Refining data analysis algorithms to improve in vivo imaging technology,
Results of Phase 1 clinical trials,
Reverse translational ("bedside to bench") research.
TVST seeks manuscripts from scientists and clinicians with diverse backgrounds ranging from basic chemistry to ophthalmic surgery that will advance or change the way we understand and/or treat vision-threatening diseases. TVST encourages the use of color, multimedia, hyperlinks, program code and other digital enhancements.