Yijia Chen, Jiapeng Li, Haoze Yu, Lin Qi, Yongchun Li
{"title":"基于熵优化和解剖先验的无源无监督域自适应眼底图像分割","authors":"Yijia Chen , Jiapeng Li , Haoze Yu , Lin Qi , Yongchun Li","doi":"10.1016/j.procs.2024.11.023","DOIUrl":null,"url":null,"abstract":"<div><div>This research focuses on fundus image segmentation within a source-free domain adaptation framework, where the availability of source images during the adaptation phase is limited due to privacy concerns. Although Source-Free Unsupervised Domain Adaptation (SFUDA) methods have seen significant innovative developments in recent years, they still face several challenges which include suboptimal performance due to substantial domain discrepancies, reliance on potentially noisy or inaccurate pseudo-labels during the adaptation process, and a lack of integration with domain-specific prior knowledge. To address these issues, this paper proposes a SFUDA framework via Entropy Optimization and Anatomical Priors (EOAPNet). To alleviate the influence of the divergence between the source and target domains, EOAPNet primarily evaluates the uncertainty (i.e., entropy) of predictions on target domain data and improves the model by focusing on high-entropy pixels or regions. Additionally, a weak-strong augmentation mean-teacher scheme is introduced in EOAPNet, which can enhance the accuracy of pseudo-labels and reduce error propagation. Thirdly, by integrating an anatomical knowledge-based class ratio prior into the overall loss function in the form of a Kullback–Leibler (KL) divergence, EOAPNet also incorporates expert domain knowledge. EOAPNet yields comparable results to several state-of-the-art adaptation techniques in experiments on two retinal image segmentation datasets involving the RIM-ONE-r3 and Drishti-GS datasets.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"250 ","pages":"Pages 182-187"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Source-Free Unsupervised Domain Adaptation Fundus Image Segmentation via Entropy Optimization and Anatomical Priors\",\"authors\":\"Yijia Chen , Jiapeng Li , Haoze Yu , Lin Qi , Yongchun Li\",\"doi\":\"10.1016/j.procs.2024.11.023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This research focuses on fundus image segmentation within a source-free domain adaptation framework, where the availability of source images during the adaptation phase is limited due to privacy concerns. Although Source-Free Unsupervised Domain Adaptation (SFUDA) methods have seen significant innovative developments in recent years, they still face several challenges which include suboptimal performance due to substantial domain discrepancies, reliance on potentially noisy or inaccurate pseudo-labels during the adaptation process, and a lack of integration with domain-specific prior knowledge. To address these issues, this paper proposes a SFUDA framework via Entropy Optimization and Anatomical Priors (EOAPNet). To alleviate the influence of the divergence between the source and target domains, EOAPNet primarily evaluates the uncertainty (i.e., entropy) of predictions on target domain data and improves the model by focusing on high-entropy pixels or regions. Additionally, a weak-strong augmentation mean-teacher scheme is introduced in EOAPNet, which can enhance the accuracy of pseudo-labels and reduce error propagation. 
Thirdly, by integrating an anatomical knowledge-based class ratio prior into the overall loss function in the form of a Kullback–Leibler (KL) divergence, EOAPNet also incorporates expert domain knowledge. EOAPNet yields comparable results to several state-of-the-art adaptation techniques in experiments on two retinal image segmentation datasets involving the RIM-ONE-r3 and Drishti-GS datasets.</div></div>\",\"PeriodicalId\":20465,\"journal\":{\"name\":\"Procedia Computer Science\",\"volume\":\"250 \",\"pages\":\"Pages 182-187\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Procedia Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1877050924032332\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedia Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877050924032332","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Source-Free Unsupervised Domain Adaptation Fundus Image Segmentation via Entropy Optimization and Anatomical Priors
This research focuses on fundus image segmentation within a source-free domain adaptation framework, where access to source images during the adaptation phase is restricted due to privacy concerns. Although Source-Free Unsupervised Domain Adaptation (SFUDA) methods have seen significant innovation in recent years, they still face several challenges: suboptimal performance caused by substantial domain discrepancies, reliance on potentially noisy or inaccurate pseudo-labels during adaptation, and a lack of integration with domain-specific prior knowledge. To address these issues, this paper proposes an SFUDA framework based on Entropy Optimization and Anatomical Priors (EOAPNet). First, to alleviate the influence of the divergence between the source and target domains, EOAPNet evaluates the uncertainty (i.e., entropy) of predictions on target-domain data and improves the model by focusing on high-entropy pixels or regions. Second, a weak-strong augmentation mean-teacher scheme is introduced, which enhances the accuracy of pseudo-labels and reduces error propagation. Third, an anatomical knowledge-based class-ratio prior is integrated into the overall loss function in the form of a Kullback-Leibler (KL) divergence, so that EOAPNet also incorporates expert domain knowledge. In experiments on two retinal image segmentation datasets, RIM-ONE-r3 and Drishti-GS, EOAPNet yields results comparable to several state-of-the-art adaptation techniques.
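To make the three mechanisms named in the abstract more concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes a (B, C, H, W) logits tensor, a hypothetical `top_quantile` cut-off for selecting high-entropy pixels, and an illustrative `prior_ratio` vector standing in for the anatomical class-ratio prior; the exact losses and weightings used in EOAPNet may differ.

```python
# Sketch (under the assumptions stated above) of: (1) a per-pixel entropy map,
# (2) a consistency term that emphasises high-entropy regions, and
# (3) a KL penalty matching the predicted class ratio to an anatomical prior.
import torch
import torch.nn.functional as F


def pixelwise_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax prediction at every pixel.
    logits: (B, C, H, W) raw outputs; returns (B, H, W); high values = uncertain pixels."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(probs * log_probs).sum(dim=1)


def entropy_weighted_consistency(student_logits, teacher_probs, top_quantile=0.8):
    """Student-teacher consistency restricted to the most uncertain pixels
    (one plausible reading of "focusing on high-entropy pixels or regions")."""
    ent = pixelwise_entropy(student_logits)                        # (B, H, W)
    thresh = torch.quantile(ent.flatten(1), top_quantile, dim=1)   # per-image cut-off
    mask = (ent >= thresh.view(-1, 1, 1)).float()
    per_pixel = F.kl_div(F.log_softmax(student_logits, dim=1),
                         teacher_probs, reduction="none").sum(dim=1)
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)


def class_ratio_kl(logits: torch.Tensor, prior_ratio: torch.Tensor) -> torch.Tensor:
    """KL(prior || predicted ratio), where the predicted ratio is the mean
    softmax probability per class over all pixels in the batch."""
    probs = F.softmax(logits, dim=1)                 # (B, C, H, W)
    pred_ratio = probs.mean(dim=(0, 2, 3))           # (C,)
    pred_ratio = pred_ratio / pred_ratio.sum()
    eps = 1e-8                                       # numerical safety for log()
    return torch.sum(prior_ratio * ((prior_ratio + eps).log() - (pred_ratio + eps).log()))


# Example usage with purely illustrative prior values (background, optic disc, optic cup):
# prior = torch.tensor([0.90, 0.07, 0.03])
# loss = entropy_weighted_consistency(student(x_strong), teacher_probs) \
#        + class_ratio_kl(student(x_strong), prior)
```

In a mean-teacher setup of this kind, `teacher_probs` would typically come from the teacher network applied to a weakly augmented view, while the student sees a strongly augmented view; the entropy mask then concentrates the consistency signal where the pseudo-labels are least reliable.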