{"title":"Computer-aided diagnosis of early-stage Retinopathy of Prematurity in neonatal fundus images using artificial intelligence.","authors":"V M Raja Sankari, Snekhalatha Umapathy","doi":"10.1088/2057-1976/ad91ba","DOIUrl":null,"url":null,"abstract":"<p><p>Retinopathy of Prematurity (ROP) is a retinal disorder affecting preterm babies, which can lead to permanent blindness without treatment. Early-stage ROP diagnosis is vital in providing optimal therapy for the neonates. The proposed study predicts early-stage ROP from neonatal fundus images using Machine Learning (ML) classifiers and Convolutional Neural Networks (CNN) based pre-trained networks. The characteristic demarcation lines and ridges in early stage ROP are segmented utilising a novel Swin U-Net. 2000 Scale Invariant Feature Transform (SIFT) descriptors were extracted from the segmented ridges and are dimensionally reduced to 50 features using Principal Component Analysis (PCA). Seven ROP-specific features, including six Gray Level Co-occurrence Matrix (GLCM) and ridge length features, are extracted from the segmented image and are fused with the PCA reduced 50 SIFT features. Finally, three ML classifiers, such as Support Vector Machine (SVM), Random Forest (RF), and<i>k</i>- Nearest Neighbor (<i>k</i>-NN), are used to classify the 50 features to predict the early-stage ROP from Normal images. On the other hand, the raw retinal images are classified directly into normal and early-stage ROP using six pre-trained classifiers, namely ResNet50, ShuffleNet V2, EfficientNet, MobileNet, VGG16, and DarkNet19. It is seen that the ResNet50 network outperformed all other networks in predicting early-stage ROP with 89.5% accuracy, 87.5% sensitivity, 91.5% specificity, 91.1% precision, 88% NPV and an Area Under the Curve (AUC) of 0.92. Swin U-Net Convolutional Neural Networks (CNN) segmented the ridges and demarcation lines with an accuracy of 89.7% with 80.5% precision, 92.6% recall, 75.76% IoU, and 0.86 as the Dice coefficient. The SVM classifier using the 57 features from the segmented images achieved a classification accuracy of 88.75%, sensitivity of 90%, specificity of 87.5%, and an AUC of 0.91. The system can be utilised as a point-of-care diagnostic tool for ROP diagnosis of neonates in remote areas.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Physics & Engineering Express","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2057-1976/ad91ba","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Retinopathy of Prematurity (ROP) is a retinal disorder affecting preterm infants that can lead to permanent blindness if left untreated. Early-stage diagnosis of ROP is vital for providing optimal therapy to neonates. The proposed study predicts early-stage ROP from neonatal fundus images using Machine Learning (ML) classifiers and pre-trained Convolutional Neural Network (CNN) models. The characteristic demarcation lines and ridges of early-stage ROP are segmented utilising a novel Swin U-Net. A total of 2000 Scale-Invariant Feature Transform (SIFT) descriptors are extracted from the segmented ridges and dimensionally reduced to 50 features using Principal Component Analysis (PCA). Seven ROP-specific features, comprising six Gray-Level Co-occurrence Matrix (GLCM) features and a ridge-length feature, are extracted from the segmented image and fused with the 50 PCA-reduced SIFT features. Finally, three ML classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbor (k-NN), are used to classify the fused 57-feature set to distinguish early-stage ROP from normal images. In parallel, the raw retinal images are classified directly into normal and early-stage ROP using six pre-trained networks, namely ResNet50, ShuffleNet V2, EfficientNet, MobileNet, VGG16, and DarkNet19. The ResNet50 network outperformed all other networks in predicting early-stage ROP, with 89.5% accuracy, 87.5% sensitivity, 91.5% specificity, 91.1% precision, 88% Negative Predictive Value (NPV), and an Area Under the Curve (AUC) of 0.92. The Swin U-Net segmented the ridges and demarcation lines with 89.7% accuracy, 80.5% precision, 92.6% recall, 75.76% Intersection over Union (IoU), and a Dice coefficient of 0.86. The SVM classifier using the 57 features from the segmented images achieved a classification accuracy of 88.75%, sensitivity of 90%, specificity of 87.5%, and an AUC of 0.91. The system can be utilised as a point-of-care tool for ROP diagnosis in neonates in remote areas.
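To make the handcrafted-feature branch concrete, the following is a minimal Python sketch of one plausible reading of the described pipeline: SIFT descriptors from the segmented ridges are pooled per image and PCA-reduced to 50 components, fused with six GLCM features and a ridge-length feature, and classified with an SVM. It uses OpenCV, scikit-image, and scikit-learn; the descriptor pooling, GLCM parameters, and ridge-length definition are assumptions, not the authors' exact settings.

```python
# Hedged sketch of the handcrafted-feature branch described in the abstract:
# SIFT descriptors on segmented ridges -> PCA (50) -> fuse with 6 GLCM
# features + ridge length -> SVM. Pooling and parameters are assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.morphology import skeletonize
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_SIFT = 2000   # descriptors requested per image (as stated in the abstract)
N_PCA = 50      # PCA components retained

def sift_vector(gray, ridge_mask):
    """Mean-pool up to N_SIFT SIFT descriptors computed on the ridge region."""
    sift = cv2.SIFT_create(nfeatures=N_SIFT)
    _, desc = sift.detectAndCompute(gray, mask=ridge_mask.astype(np.uint8) * 255)
    if desc is None:                      # no keypoints detected
        return np.zeros(128, dtype=np.float32)
    return desc[:N_SIFT].mean(axis=0)     # 128-D pooled descriptor

def rop_features(gray, ridge_mask):
    """Six GLCM features plus a ridge-length feature (skeleton pixel count)."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'dissimilarity', 'homogeneity',
             'energy', 'correlation', 'ASM']
    glcm_feats = [graycoprops(glcm, p)[0, 0] for p in props]
    ridge_length = skeletonize(ridge_mask > 0).sum()
    return np.array(glcm_feats + [ridge_length], dtype=np.float32)

def build_feature_matrix(grays, masks):
    """Pooled SIFT vectors -> PCA (50), fused with the 7 ROP features (57 total)."""
    sift_mat = np.stack([sift_vector(g, m) for g, m in zip(grays, masks)])
    rop_mat = np.stack([rop_features(g, m) for g, m in zip(grays, masks)])
    sift_50 = PCA(n_components=N_PCA).fit_transform(sift_mat)
    return np.hstack([sift_50, rop_mat])

# grays, masks, labels are assumed to be grayscale fundus images, Swin U-Net
# ridge masks, and binary labels (0 = normal, 1 = early-stage ROP):
# X = build_feature_matrix(grays, masks)
# clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
# clf.fit(X, labels)
```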
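For the transfer-learning branch, a minimal PyTorch/torchvision sketch shows how an ImageNet-pretrained ResNet50 (the best-performing network in the study) could be adapted for binary Normal vs. early-stage ROP classification. The optimizer, learning rate, and fine-tuning depth are assumptions rather than the authors' reported training configuration.

```python
# Hedged sketch: adapting an ImageNet-pretrained ResNet50 for binary
# Normal vs. early-stage ROP classification. Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the ImageNet head

preprocess = weights.transforms()               # resize / crop / normalize
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader, device="cpu"):
    """One fine-tuning pass over a DataLoader yielding (image, label) batches."""
    model.train().to(device)
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images.to(device))
        loss = criterion(logits, labels.to(device))
        loss.backward()
        optimizer.step()
```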
Journal introduction:
BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. The journal is characterized by broad geographical coverage and a fast-track peer-review process; relevant topics include all aspects of biophysics, medical physics, and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.