Enhancing cross-domain generalization in retinal image segmentation via style randomization and style normalization
Song Guo
Image and Vision Computing, Volume 162, Article 105694. DOI: 10.1016/j.imavis.2025.105694. Published 2025-08-08.
https://www.sciencedirect.com/science/article/pii/S0262885625002823
Retinal image segmentation is a crucial procedure for automatically diagnosing ophthalmic diseases. However, existing deep learning-based segmentation models suffer from the domain shift issue, i.e., the segmentation accuracy decreases significantly when the test and training images are sampled from different distributions. To overcome this issue, we focus on the challenging single-source domain generalization scenario, where we expect to train a well-generalized segmentation model on unseen test domains with only access to one domain during training. In this paper, we present a style randomization method, which performs random scaling transformation to the LAB components of the training image, to enrich the style diversity. Furthermore, we present a style normalization method to effectively normalize style information while preserving content by channel-wise feature standardization and dynamic feature affine transformation. Our approach is evaluated on four types of retinal image segmentation tasks, including retinal vessel, optic cup, optic disc, and hard exudate. Experimental results demonstrate that our method achieves competitive or superior performance compared to state-of-the-art approaches. Specifically, it outperforms the second-best method by 3.9%, 2.6%, and 4.8% on vessel, optic cup, and hard exudate segmentation tasks, respectively. Our code will be released at https://github.com/guomugong/SRN.
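The two ideas in the abstract can be sketched concretely. Below is a minimal NumPy illustration, not the paper's implementation: `style_randomize_lab` applies an independent random scaling factor to each LAB channel of a training image (the scale range and clipping bounds are illustrative assumptions), and `style_normalize` standardizes features channel-wise over the spatial dimensions and then applies an affine transform whose `gamma`/`beta` parameters — predicted dynamically in the paper's method — are plain inputs here.

```python
import numpy as np


def style_randomize_lab(lab_image, scale_range=(0.7, 1.3), rng=None):
    """Perturb image style by randomly scaling each LAB channel.

    `lab_image` is an (H, W, 3) float array already converted to LAB
    space. The scale range and clipping bounds are assumptions for
    illustration, not values taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    scales = rng.uniform(*scale_range, size=3)  # one factor per L, a, b channel
    out = lab_image * scales                    # broadcasts over the channel axis
    # Keep L in [0, 100] and a/b in a typical [-128, 127] range.
    out[..., 0] = np.clip(out[..., 0], 0.0, 100.0)
    out[..., 1:] = np.clip(out[..., 1:], -128.0, 127.0)
    return out


def style_normalize(features, gamma=1.0, beta=0.0, eps=1e-5):
    """Channel-wise feature standardization plus an affine transform.

    `features` is an (N, H, W, C) array; mean and std are computed per
    sample and per channel over the spatial dimensions, which removes
    instance-level style statistics while preserving spatial content.
    """
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return gamma * normalized + beta
```

The standardization step is essentially instance normalization; the design intuition is that channel statistics carry style (color, contrast) while the normalized activations retain anatomical content, so randomizing the input style and normalizing the feature style together reduce the model's reliance on domain-specific appearance.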
About the journal:
Image and Vision Computing aims primarily to provide an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding in the discipline by encouraging quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.