Title: CMFNet: a cross-dimensional modal fusion network for accurate vessel segmentation based on OCTA data
Authors: Siqi Wang, Xiaosheng Yu, Hao Wu, Ying Wang, Chengdong Wu
DOI: 10.1007/s11517-024-03256-z
Journal: Medical & Biological Engineering & Computing (impact factor 2.6, JCR Q2, Computer Science, Interdisciplinary Applications)
Publication date: 2024-12-13
Publication type: Journal Article
CMFNet: a cross-dimensional modal fusion network for accurate vessel segmentation based on OCTA data.
Optical coherence tomography angiography (OCTA) is a novel non-invasive retinal vessel imaging technique that can display high-resolution 3D vessel structures. Quantitative analysis of retinal vessel morphology plays an important role in the automatic screening and diagnosis of fundus diseases. Existing segmentation methods struggle to use the 3D volume data and 2D projection maps of OCTA images effectively at the same time, which leads to problems such as discontinuous microvessel segmentation and biased morphological estimates. To enhance diagnostic support for fundus diseases, we propose a cross-dimensional modal fusion network (CMFNet) that uses both 3D volume data and 2D projection maps for accurate OCTA vessel segmentation. First, we use separate encoders to generate 2D projection features and 3D volume features from the projection maps and volume data, respectively. Second, we design an attentional cross-feature projection learning module to purify the 3D volume features and learn their projection along the depth direction. Then, we develop a cross-dimensional hierarchical fusion module to effectively fuse the encoded features learned from the volume data and the projection maps. In addition, we extract high-level semantic weight information and map it into the cross-dimensional hierarchical fusion process to enhance fusion performance. To validate the efficacy of the proposed method, we conducted experimental evaluations on the publicly available OCTA-500 dataset. The experimental results show that our method achieves state-of-the-art performance.
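The core idea of projecting 3D volume features along the depth direction can be illustrated with a minimal sketch. This is our own simplified illustration, not the paper's actual module: the function name and the hand-supplied score array are assumptions, and in CMFNet itself the per-depth relevance scores would be produced by a learned attention layer rather than passed in directly.

```python
import numpy as np

def depth_attention_projection(volume_feat: np.ndarray,
                               scores: np.ndarray) -> np.ndarray:
    """Collapse a 3D feature volume to a 2D projection map via
    attention weights along the depth axis.

    volume_feat : (C, D, H, W) encoder features from the OCTA volume
    scores      : (D, H, W) per-pixel relevance of each depth position
                  (in the real network these would come from a learned layer)
    returns     : (C, H, W) attention-weighted projection map
    """
    # numerically stable softmax over the depth axis
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)              # (D, H, W)
    # broadcast weights over channels, then sum out the depth dimension
    return (volume_feat * w[None, ...]).sum(axis=1)   # (C, H, W)

# Zero scores give uniform weights, i.e. a plain mean projection along depth;
# learned scores would instead emphasize the most vessel-relevant depths.
vol = np.random.rand(3, 4, 5, 5)
flat = depth_attention_projection(vol, np.zeros((4, 5, 5)))
assert flat.shape == (3, 5, 5)
```

Compared with a fixed mean or maximum projection, this weighted form lets the network suppress uninformative depth slices per pixel, which is one plausible way to obtain the "purified" depth-direction projection features described in the abstract.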
Journal introduction:
Founded in 1963, Medical & Biological Engineering & Computing (MBEC) continues to serve the biomedical engineering community, covering the entire spectrum of biomedical and clinical engineering. The journal presents exciting and vital experimental and theoretical developments in biomedical science and technology, and reports on advances in computer-based methodologies in these multidisciplinary subjects. The journal also incorporates new and evolving technologies including cellular engineering and molecular imaging.
MBEC publishes original research articles as well as reviews and technical notes. Its Rapid Communications category focuses on material of immediate value to the readership, while the Controversies section provides a forum to exchange views on selected issues, stimulating a vigorous and informed debate in this exciting and high-profile field.
MBEC is an official journal of the International Federation of Medical and Biological Engineering (IFMBE).