{"title":"在U-Net中深入研究视网膜血管分割的迁移学习:一个广泛的超参数分析","authors":"G. Prethija , Jeevaa Katiravan","doi":"10.1016/j.pdpdt.2025.104620","DOIUrl":null,"url":null,"abstract":"<div><div>Blood vessel segmentation poses numerous challenges. Firstly, blood vessels often lack sufficient contrast against the background, impeding accurate differentiation. Additionally, the overlapping nature of blood vessels complicates separating individual vessels. Moreover, variations in the thickness of vessels and branching structures further augment complications to the segmentation process. These hurdles demand robust algorithms and techniques for effective blood vessel segmentation in medical imaging applications. The U-Net and its alternates have demonstrated exceptional performance related to conventional traditional Convolutional Neural Network (CNN). This study proposes a novel approach for retinal vessel segmentation through transfer learning. We proposed models such as VGG16 U-Net, VGG19 U-Net, ResNet50 U-Net, MobileNetV2 U-Net and DenseNet121 U-Net that employ pretrained models as encoders in U-Net architecture. We investigated the performance of pretrained models on DRIVE datasets with the optimizers Adam, Stochastic Gradient Descent <strong>(</strong>SGD) and RMSProp. The results revealed that models with Adam optimizer have shown better results. The evaluated results demonstrated that ResNet50 U-Net achieved the highest specificity of 0.9875, MobileNetV2 U-Net achieved a recall of 0.8056 and DenseNet121 U-Net attained an accuracy of 0.9689. VGG16 U-Net and MobileNetV2 U-Net have attained a dice coefficient of 0.849.</div></div>","PeriodicalId":20141,"journal":{"name":"Photodiagnosis and Photodynamic Therapy","volume":"53 ","pages":"Article 104620"},"PeriodicalIF":3.1000,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Delving into transfer learning within U-Net for refined retinal vessel segmentation: An extensive hyperparameter analysis\",\"authors\":\"G. Prethija , Jeevaa Katiravan\",\"doi\":\"10.1016/j.pdpdt.2025.104620\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Blood vessel segmentation poses numerous challenges. Firstly, blood vessels often lack sufficient contrast against the background, impeding accurate differentiation. Additionally, the overlapping nature of blood vessels complicates separating individual vessels. Moreover, variations in the thickness of vessels and branching structures further augment complications to the segmentation process. These hurdles demand robust algorithms and techniques for effective blood vessel segmentation in medical imaging applications. The U-Net and its alternates have demonstrated exceptional performance related to conventional traditional Convolutional Neural Network (CNN). This study proposes a novel approach for retinal vessel segmentation through transfer learning. We proposed models such as VGG16 U-Net, VGG19 U-Net, ResNet50 U-Net, MobileNetV2 U-Net and DenseNet121 U-Net that employ pretrained models as encoders in U-Net architecture. We investigated the performance of pretrained models on DRIVE datasets with the optimizers Adam, Stochastic Gradient Descent <strong>(</strong>SGD) and RMSProp. The results revealed that models with Adam optimizer have shown better results. 
The evaluated results demonstrated that ResNet50 U-Net achieved the highest specificity of 0.9875, MobileNetV2 U-Net achieved a recall of 0.8056 and DenseNet121 U-Net attained an accuracy of 0.9689. VGG16 U-Net and MobileNetV2 U-Net have attained a dice coefficient of 0.849.</div></div>\",\"PeriodicalId\":20141,\"journal\":{\"name\":\"Photodiagnosis and Photodynamic Therapy\",\"volume\":\"53 \",\"pages\":\"Article 104620\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photodiagnosis and Photodynamic Therapy\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1572100025001528\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photodiagnosis and Photodynamic Therapy","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1572100025001528","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
Delving into transfer learning within U-Net for refined retinal vessel segmentation: An extensive hyperparameter analysis
Abstract:
Blood vessel segmentation poses numerous challenges. First, blood vessels often lack sufficient contrast against the background, which impedes accurate differentiation. In addition, the overlapping nature of blood vessels complicates separating individual vessels, and variations in vessel thickness and branching structure complicate segmentation further. These hurdles demand robust algorithms and techniques for effective blood vessel segmentation in medical imaging applications. U-Net and its variants have demonstrated exceptional performance compared with conventional Convolutional Neural Networks (CNNs). This study proposes a novel approach to retinal vessel segmentation based on transfer learning: VGG16 U-Net, VGG19 U-Net, ResNet50 U-Net, MobileNetV2 U-Net and DenseNet121 U-Net, which employ pretrained networks as encoders within the U-Net architecture. We investigated the performance of these models on the DRIVE dataset with the Adam, Stochastic Gradient Descent (SGD) and RMSProp optimizers. The results revealed that models trained with the Adam optimizer performed best: ResNet50 U-Net achieved the highest specificity of 0.9875, MobileNetV2 U-Net achieved a recall of 0.8056, DenseNet121 U-Net attained an accuracy of 0.9689, and VGG16 U-Net and MobileNetV2 U-Net both attained a Dice coefficient of 0.849.
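The abstract does not specify the implementation, so the following is a minimal sketch, assuming a TensorFlow/Keras setup, of how a pretrained ImageNet encoder (here MobileNetV2) can be wired into a U-Net-style decoder and compiled with the Adam optimizer and a Dice-coefficient metric. The skip-connection layer names, decoder widths, input size and learning rate are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch (assumptions: TensorFlow/Keras, 512x512 RGB patches, illustrative
# decoder widths and learning rate -- not the configuration reported in the paper).
import tensorflow as tf
from tensorflow.keras import layers, Model


def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Soft Dice coefficient, used here as a training-time metric."""
    y_true = tf.cast(y_true, tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)


def mobilenetv2_unet(input_shape=(512, 512, 3)):
    # Pretrained ImageNet encoder; intermediate activations serve as skip connections.
    encoder = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")

    # Layer names from the stock Keras MobileNetV2 graph, at strides 2/4/8/16.
    skip_names = ["block_1_expand_relu", "block_3_expand_relu",
                  "block_6_expand_relu", "block_13_expand_relu"]
    skips = [encoder.get_layer(name).output for name in skip_names]
    x = encoder.get_layer("block_16_project").output  # bottleneck at stride 32

    # Decoder: upsample, concatenate with the matching skip, then refine with convs.
    for skip, filters in zip(reversed(skips), [512, 256, 128, 64]):
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # Final upsample back to input resolution and a 1-channel vessel probability map.
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs=encoder.input, outputs=outputs)


model = mobilenetv2_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=[dice_coefficient])
```

Swapping in `tf.keras.applications.VGG16`, `VGG19`, `ResNet50` or `DenseNet121` and selecting the corresponding skip layers would yield the other encoder variants, and recompiling with SGD or RMSProp would reproduce the optimizer comparison described above.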
About the journal:
Photodiagnosis and Photodynamic Therapy is an international journal for the dissemination of scientific knowledge and clinical developments of Photodiagnosis and Photodynamic Therapy in all medical specialties. The journal publishes original articles, review articles, case presentations, "how-to-do-it" articles, Letters to the Editor, short communications and relevant images with short descriptions. All submitted material is subject to a strict peer-review process.