Jose Vargas Quiros, Bart Liefers, Karin A van Garderen, Jeroen P Vermeulen, Caroline Klaver
{"title":"VascX模型:从彩色眼底图像进行视网膜血管分析的深度集合。","authors":"Jose Vargas Quiros, Bart Liefers, Karin A van Garderen, Jeroen P Vermeulen, Caroline Klaver","doi":"10.1167/tvst.14.7.19","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To present and validate deep learning model ensembles (VascX) for vessel, artery-vein, optic disc segmentation, and fovea localization for color fundus images (CFIs). VascX preprocessing and inference code and model weights were made publicly available to facilitate research on retinal vasculature.</p><p><strong>Methods: </strong>For model training, we combined over 15 published annotated datasets with CFIs from Dutch studies (mainly the Rotterdam Study). This resulted in diverse development sets with a variety of patient characteristics and imaging conditions. We trained UNet model ensembles using a new, more robust preprocessing algorithm and strong data augmentations. We compared VascX segmentation performance (Dice) to models with publicly available weights: AutoMorph and LittleWNet. We compared the quality of VascX (and previous models') features by measuring agreement (mean absolute error [MAE] and Pearson correlation) with features extracted from grader segmentations.</p><p><strong>Results: </strong>Dice scores revealed better performance from VascX across most datasets evaluated, especially for artery-vein and optic disc segmentation. VascX performed more consistently as the quality of the images decreased and for both disc and fovea-centered images. These improvements translated into higher-quality vascular features. Of 24 features evaluated, 14 showed a significant improvement in MAE when compared to AutoMorph and 23 when compared to LWNet. VascX had the highest correlations with ground-truth features in all but two cases.</p><p><strong>Conclusions: </strong>VascX models perform well across a variety of conditions, likely due to the size and diversity of our development sets. VascX represents an important improvement in segmentation quality that translates into better vascular features to support more robust analyses of the retinal vasculature.</p><p><strong>Translational relevance: </strong>By making VascX public, we aim to facilitate and improve research linking retinal vascular biomarkers to ophthalmic and systemic conditions, relevant for the detection, prevention, and monitoring of disease.</p>","PeriodicalId":23322,"journal":{"name":"Translational Vision Science & Technology","volume":"14 7","pages":"19"},"PeriodicalIF":2.6000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306690/pdf/","citationCount":"0","resultStr":"{\"title\":\"VascX Models: Deep Ensembles for Retinal Vascular Analysis From Color Fundus Images.\",\"authors\":\"Jose Vargas Quiros, Bart Liefers, Karin A van Garderen, Jeroen P Vermeulen, Caroline Klaver\",\"doi\":\"10.1167/tvst.14.7.19\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To present and validate deep learning model ensembles (VascX) for vessel, artery-vein, optic disc segmentation, and fovea localization for color fundus images (CFIs). VascX preprocessing and inference code and model weights were made publicly available to facilitate research on retinal vasculature.</p><p><strong>Methods: </strong>For model training, we combined over 15 published annotated datasets with CFIs from Dutch studies (mainly the Rotterdam Study). 
This resulted in diverse development sets with a variety of patient characteristics and imaging conditions. We trained UNet model ensembles using a new, more robust preprocessing algorithm and strong data augmentations. We compared VascX segmentation performance (Dice) to models with publicly available weights: AutoMorph and LittleWNet. We compared the quality of VascX (and previous models') features by measuring agreement (mean absolute error [MAE] and Pearson correlation) with features extracted from grader segmentations.</p><p><strong>Results: </strong>Dice scores revealed better performance from VascX across most datasets evaluated, especially for artery-vein and optic disc segmentation. VascX performed more consistently as the quality of the images decreased and for both disc and fovea-centered images. These improvements translated into higher-quality vascular features. Of 24 features evaluated, 14 showed a significant improvement in MAE when compared to AutoMorph and 23 when compared to LWNet. VascX had the highest correlations with ground-truth features in all but two cases.</p><p><strong>Conclusions: </strong>VascX models perform well across a variety of conditions, likely due to the size and diversity of our development sets. VascX represents an important improvement in segmentation quality that translates into better vascular features to support more robust analyses of the retinal vasculature.</p><p><strong>Translational relevance: </strong>By making VascX public, we aim to facilitate and improve research linking retinal vascular biomarkers to ophthalmic and systemic conditions, relevant for the detection, prevention, and monitoring of disease.</p>\",\"PeriodicalId\":23322,\"journal\":{\"name\":\"Translational Vision Science & Technology\",\"volume\":\"14 7\",\"pages\":\"19\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306690/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Translational Vision Science & Technology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/tvst.14.7.19\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Translational Vision Science & Technology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/tvst.14.7.19","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
VascX Models: Deep Ensembles for Retinal Vascular Analysis From Color Fundus Images.
Purpose: To present and validate deep learning model ensembles (VascX) for vessel, artery-vein, and optic disc segmentation and for fovea localization in color fundus images (CFIs). The VascX preprocessing and inference code and model weights were made publicly available to facilitate research on the retinal vasculature.
Methods: For model training, we combined over 15 published annotated datasets with CFIs from Dutch studies (mainly the Rotterdam Study). This resulted in diverse development sets covering a variety of patient characteristics and imaging conditions. We trained UNet model ensembles using a new, more robust preprocessing algorithm and strong data augmentations. We compared VascX segmentation performance (Dice) to that of models with publicly available weights: AutoMorph and LittleWNet (LWNet). We compared the quality of VascX (and the previous models') features by measuring agreement (mean absolute error [MAE] and Pearson correlation) with features extracted from grader segmentations.
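The abstract does not include implementation details, but the segmentation comparison relies on the standard Dice overlap metric. A minimal sketch of that metric is shown below; the array names and shapes are assumptions for illustration, not part of the VascX codebase.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Example: compare a predicted vessel mask against a grader's annotation.
rng = np.random.default_rng(0)
pred_mask = rng.random((512, 512)) > 0.5    # stand-in for a model's binary output
grader_mask = rng.random((512, 512)) > 0.5  # stand-in for a manual grader segmentation
print(f"Dice: {dice_score(pred_mask, grader_mask):.3f}")
```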
Results: Dice scores revealed better performance from VascX across most datasets evaluated, especially for artery-vein and optic disc segmentation. VascX performed more consistently as the quality of the images decreased and for both disc- and fovea-centered images. These improvements translated into higher-quality vascular features. Of 24 features evaluated, 14 showed a significant improvement in MAE when compared to AutoMorph and 23 when compared to LWNet. VascX had the highest correlations with ground-truth features in all but two cases.
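As an illustration of how agreement between model-derived and grader-derived vascular features might be quantified with MAE and Pearson correlation, here is a hedged sketch; the variable names and synthetic values are assumptions for demonstration, not the study's actual features.

```python
import numpy as np
from scipy.stats import pearsonr

def feature_agreement(model_vals: np.ndarray, grader_vals: np.ndarray) -> tuple[float, float]:
    """MAE and Pearson correlation between a vascular feature computed from
    model segmentations and the same feature computed from grader segmentations."""
    mae = float(np.mean(np.abs(model_vals - grader_vals)))
    r, _ = pearsonr(model_vals, grader_vals)
    return mae, float(r)

# Example with a synthetic feature (e.g., a vessel-caliber-like quantity) over 100 eyes.
rng = np.random.default_rng(1)
grader_vals = rng.normal(150.0, 15.0, size=100)
model_vals = grader_vals + rng.normal(0.0, 5.0, size=100)  # model estimates with noise
mae, r = feature_agreement(model_vals, grader_vals)
print(f"MAE: {mae:.2f}, Pearson r: {r:.3f}")
```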
Conclusions: VascX models perform well across a variety of conditions, likely due to the size and diversity of our development sets. VascX represents an important improvement in segmentation quality that translates into better vascular features to support more robust analyses of the retinal vasculature.
Translational relevance: By making VascX public, we aim to facilitate and improve research linking retinal vascular biomarkers to ophthalmic and systemic conditions, relevant for the detection, prevention, and monitoring of disease.
About the journal:
Translational Vision Science & Technology (TVST) is an official journal of the Association for Research in Vision and Ophthalmology (ARVO), an international organization whose purpose is to advance research worldwide into understanding the visual system and preventing, treating, and curing its disorders. TVST is an online, open-access, peer-reviewed journal emphasizing multidisciplinary research that bridges the gap between basic research and clinical care. A highly qualified and diverse group of Associate Editors and Editorial Board Members is led by Editor-in-Chief Marco Zarbin, MD, PhD, FARVO.
The journal covers a broad spectrum of work, including but not limited to:
Applications of stem cell technology for regenerative medicine,
Development of new animal models of human diseases,
Tissue bioengineering,
Chemical engineering to improve virus-based gene delivery,
Nanotechnology for drug delivery,
Design and synthesis of artificial extracellular matrices,
Development of a true microsurgical operating environment,
Refining data analysis algorithms to improve in vivo imaging technology,
Results of Phase 1 clinical trials,
Reverse translational ("bedside to bench") research.
TVST seeks manuscripts from scientists and clinicians with diverse backgrounds ranging from basic chemistry to ophthalmic surgery that will advance or change the way we understand and/or treat vision-threatening diseases. TVST encourages the use of color, multimedia, hyperlinks, program code and other digital enhancements.