Jianyong Li, Chengbei Li, Lei Yang, Yanhong Liu
{"title":"DDAF-Net:用于视网膜血管分割的双向注意融合网络","authors":"Jianyong Li , Chengbei Li , Lei Yang , Yanhong Liu","doi":"10.1016/j.bspc.2025.108829","DOIUrl":null,"url":null,"abstract":"<div><div>Accurate and effective segmentation of retinal fundus vessels images plays a pivotal role in clinical diagnosis and treatment. However, due to some challenging factors, such as the intricate morphology, low contrast, high background noise, and class imbalance issue of retinal fundus vessels, etc, precise segmentation of retinal fundus vessels remains an exceedingly challenging task. In this paper, a Dual-Direction Attention Fusion Network, abbreviated as DDAF-Net, is presented for the automated segmentation of retinal fundus vessels. To enhance the feature extraction capability of the segmentation network, a dual-encoder block is proposed to obtain stronger feature information. In this case, recurrent convolutions are used in parallel with standard convolution to enable simultaneous extraction of detail information and global contextual information. In addition, to address the problem of loss of detail information caused by multiple pooling operations at the encoder part, a dual-direction skip connection is introduced between the encoder and decoder, to realize effective feature reutilization of fine-grained information and global contextual information to enhance the continuity of the network in blood vessel segmentation. Finally, a joint attention mechanism is proposed in the decoder part, incorporating channel, spatial, and scale attention, to improve the feature extraction capability against morphologically complex fine vessels and lesion-disturbed images. The experimental findings show that the segmentation model proposed in this paper, realizes the extraction of retinal fundus vascular detail information and global contextual information at the same time. In comparison to existing segmentation models, it exhibits superior performance.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108829"},"PeriodicalIF":4.9000,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DDAF-Net: A Dual-Direction Attention Fusion Network for retinal vessel segmentation\",\"authors\":\"Jianyong Li , Chengbei Li , Lei Yang , Yanhong Liu\",\"doi\":\"10.1016/j.bspc.2025.108829\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Accurate and effective segmentation of retinal fundus vessels images plays a pivotal role in clinical diagnosis and treatment. However, due to some challenging factors, such as the intricate morphology, low contrast, high background noise, and class imbalance issue of retinal fundus vessels, etc, precise segmentation of retinal fundus vessels remains an exceedingly challenging task. In this paper, a Dual-Direction Attention Fusion Network, abbreviated as DDAF-Net, is presented for the automated segmentation of retinal fundus vessels. To enhance the feature extraction capability of the segmentation network, a dual-encoder block is proposed to obtain stronger feature information. In this case, recurrent convolutions are used in parallel with standard convolution to enable simultaneous extraction of detail information and global contextual information. 
In addition, to address the problem of loss of detail information caused by multiple pooling operations at the encoder part, a dual-direction skip connection is introduced between the encoder and decoder, to realize effective feature reutilization of fine-grained information and global contextual information to enhance the continuity of the network in blood vessel segmentation. Finally, a joint attention mechanism is proposed in the decoder part, incorporating channel, spatial, and scale attention, to improve the feature extraction capability against morphologically complex fine vessels and lesion-disturbed images. The experimental findings show that the segmentation model proposed in this paper, realizes the extraction of retinal fundus vascular detail information and global contextual information at the same time. In comparison to existing segmentation models, it exhibits superior performance.</div></div>\",\"PeriodicalId\":55362,\"journal\":{\"name\":\"Biomedical Signal Processing and Control\",\"volume\":\"113 \",\"pages\":\"Article 108829\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2025-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Signal Processing and Control\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1746809425013400\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425013400","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
DDAF-Net: A Dual-Direction Attention Fusion Network for retinal vessel segmentation
Accurate and effective segmentation of retinal fundus vessel images plays a pivotal role in clinical diagnosis and treatment. However, owing to challenging factors such as intricate vessel morphology, low contrast, high background noise, and class imbalance, precise segmentation of retinal fundus vessels remains an exceedingly difficult task. In this paper, a Dual-Direction Attention Fusion Network (DDAF-Net) is presented for the automated segmentation of retinal fundus vessels. To strengthen the feature extraction capability of the network, a dual-encoder block is proposed in which recurrent convolutions run in parallel with standard convolutions, so that fine detail and global contextual information are extracted simultaneously. In addition, to counter the loss of detail caused by repeated pooling in the encoder, a dual-direction skip connection is introduced between the encoder and decoder; it enables effective reuse of both fine-grained and global contextual features and improves the continuity of the segmented vessels. Finally, a joint attention mechanism incorporating channel, spatial, and scale attention is proposed in the decoder to improve feature extraction for morphologically complex fine vessels and lesion-disturbed images. Experimental results show that the proposed model extracts retinal vascular detail and global contextual information simultaneously and outperforms existing segmentation models.
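The abstract gives no implementation details, but its two most concrete mechanisms (a standard-convolution branch in parallel with a recurrent-convolution branch, and channel plus spatial attention in the decoder) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' code: the module names, channel widths, recurrence depth `t`, and the SE/CBAM-style attention formulations are choices made for this sketch, and the scale-attention component is omitted because the abstract does not describe it.

```python
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Recurrent convolution: one shared 3x3 conv applied t times,
    re-injecting the block input at every step (R2U-Net-style), which
    iteratively enlarges the effective receptive field."""

    def __init__(self, channels: int, t: int = 2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)  # feed the block input back in at each step
        return out


class DualEncoderBlock(nn.Module):
    """Two parallel branches: a standard conv for local detail and a
    recurrent conv for wider context, fused by a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.standard = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.proj = nn.Conv2d(in_ch, out_ch, 1)  # channel match for the recurrent branch
        self.recurrent = RecurrentConv(out_ch)
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x):
        detail = self.standard(x)               # fine-detail branch
        context = self.recurrent(self.proj(x))  # contextual branch
        return self.fuse(torch.cat([detail, context], dim=1))


class JointAttention(nn.Module):
    """SE-style channel attention followed by CBAM-style spatial attention;
    the paper's scale attention is left out for lack of detail."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)             # reweight channels
        avg = x.mean(dim=1, keepdim=True)   # per-pixel channel mean
        mx, _ = x.max(dim=1, keepdim=True)  # per-pixel channel max
        return x * self.spatial(torch.cat([avg, mx], dim=1))


if __name__ == "__main__":
    patch = torch.randn(1, 3, 64, 64)        # toy RGB fundus patch
    feats = DualEncoderBlock(3, 32)(patch)   # -> (1, 32, 64, 64)
    print(JointAttention(32)(feats).shape)   # same shape, attention-reweighted
```

The parallel-branch design reflects the trade-off the abstract names: a plain 3x3 conv keeps thin-vessel detail sharp, while the repeated application in the recurrent branch accumulates context without extra pooling.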
Journal introduction:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of engineering and clinical science. The scope of the journal includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.