IEEE Transactions on Medical Imaging: Latest Articles

Dual-Scanning Photoacoustic Endomicroscopy for High-Speed Gastrointestinal Microvascular Imaging
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-21. DOI: 10.1109/TMI.2025.3544403
Hongdian Sun; Xiao Liang; Linyang Li; Yuanlong Zhao; Heng Guo; Weizhi Qi; Lei Xi
Abstract: Photoacoustic endomicroscopy enables high-resolution imaging of deep microvasculature within the gastrointestinal wall using modulated laser pulses with point-by-point scanning. However, conventional scanning mechanisms frequently struggle to balance imaging speed and field of view, particularly when imaging the peristaltic gastrointestinal tract. To address this challenge, we propose a dual-scanning photoacoustic endomicroscopy probe with an adjustable focal plane and an ultrafast imaging speed. The probe features two distinct scanning modes: 360° angular scanning, which provides a wide field of view, and regional spiral scanning, which offers high image quality. We demonstrate the capability of this probe by imaging both phantoms and rat rectums. The results from a rectal injury model demonstrate the applicability and sensitivity of the probe. Overall, this study offers new perspectives for expanding the applications and clinical potential of photoacoustic endomicroscopy.
Vol. 44, No. 6, pp. 2709-2717. Citations: 0
Organ-DETR: Organ Detection via Transformers
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-19. DOI: 10.1109/TMI.2025.3543581
Morteza Ghahremani; Benjamin Raphael Ernhofer; Jiajun Wang; Marcus Makowski; Christian Wachinger
Abstract: Query-based Transformers have been yielding impressive performance in object localization and detection tasks. However, their application to organ detection in 3D medical imaging data has been relatively unexplored. This study introduces Organ-DETR, featuring two innovative modules, MultiScale Attention (MSA) and Dense Query Matching (DQM), designed to enhance the performance of Detection Transformers (DETRs) for 3D organ detection. MSA is a novel top-down representation learning approach for efficiently encoding Computed Tomography (CT) features. This architecture employs a multiscale attention mechanism, utilizing both dual self-attention and cross-scale attention to extract intra- and inter-scale spatial interactions. Organ-DETR also introduces DQM, a one-to-many matching approach that tackles the label assignment difficulties in organ detection. DQM increases the number of positive queries to enhance both recall scores and training efficiency without the need for additional learnable parameters. Extensive results on five 3D CT datasets indicate that the proposed Organ-DETR outperforms comparable techniques with a remarkable improvement of +10.6 mAP COCO. The project and code are available at https://github.com/ai-med/OrganDETR.
Vol. 44, No. 6, pp. 2657-2671. Citations: 0
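The abstract describes DQM only at a high level. As a hypothetical illustration (the function name and the top-k rule are assumptions, not the paper's exact procedure), a one-to-many assignment that gives each ground-truth organ several positive queries instead of a single Hungarian match might look like:

```python
import numpy as np

def one_to_many_assign(cost, k=3):
    """Assign each ground-truth organ to its k cheapest queries.

    cost: (num_queries, num_gt) matching cost matrix.
    Returns a list of (query_idx, gt_idx) positive pairs. A one-to-one
    Hungarian assignment would yield num_gt positives; this one-to-many
    variant yields up to k * num_gt, boosting recall during training.
    """
    pairs = []
    for gt in range(cost.shape[1]):
        top_q = np.argsort(cost[:, gt])[:k]  # k cheapest queries for this GT
        pairs.extend((int(q), gt) for q in top_q)
    return pairs

cost = np.array([[0.1, 0.9],
                 [0.4, 0.2],
                 [0.8, 0.3],
                 [0.2, 0.7]])
print(one_to_many_assign(cost, k=2))  # [(0, 0), (3, 0), (1, 1), (2, 1)]
```

With k=1 this degenerates to a greedy one-to-one assignment; larger k multiplies the positive supervision without adding learnable parameters.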
Editorial: Flagship Toward the Future
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-17. DOI: 10.1109/TMI.2025.3543049
Ge Wang
Abstract: This editorial presents the vision and strategic direction of IEEE Transactions on Medical Imaging (TMI) under new leadership. Key points include restructuring the editorial board to enhance efficiency and diversity, streamlining the peer review process to improve decision quality and speed, and launching the AI for TMI (AI4TMI) initiative to integrate AI into journal management. Through these efforts, TMI aims to sustain excellence, adapt to emerging trends, and shape the future of medical imaging research.
Vol. 44, No. 3, pp. 1113-1114. Citations: 0
Cross- and Intra-Image Prototypical Learning for Multi-Label Disease Diagnosis and Interpretation
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-14. DOI: 10.1109/TMI.2025.3541830
Chong Wang; Fengbei Liu; Yuanhong Chen; Helen Frazer; Gustavo Carneiro
Abstract: Recent advances in prototypical learning have shown remarkable potential to provide useful decision interpretations that associate activation maps and predictions with class-specific training prototypes. Such prototypical learning has been well studied for various single-label diseases. In the highly relevant and more challenging multi-label setting, however, where multiple diseases are often concurrent within an image, existing prototypical learning models struggle to obtain meaningful activation maps and effective class prototypes because the diseases are entangled. In this paper, we present a novel Cross- and Intra-image Prototypical Learning (CIPL) framework for accurate multi-label disease diagnosis and interpretation from medical images. CIPL takes advantage of common cross-image semantics to disentangle the multiple diseases when learning the prototypes, allowing a comprehensive understanding of complicated pathological lesions. Furthermore, we propose a new two-level alignment-based regularisation strategy that effectively leverages consistent intra-image information to enhance interpretation robustness and predictive performance. Extensive experiments show that CIPL attains state-of-the-art (SOTA) classification accuracy on two public multi-label disease diagnosis benchmarks: thoracic radiography and fundus images. Quantitative interpretability results show that CIPL also outperforms other leading saliency- and prototype-based explanation methods on weakly-supervised thoracic disease localisation.
Vol. 44, No. 6, pp. 2568-2580. Citations: 0
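For readers unfamiliar with prototype-based diagnosis, a generic sketch (not CIPL itself; the names and cosine-similarity scoring are assumptions) of how class prototypes yield both a logit and an interpretable peak location per disease:

```python
import numpy as np

def prototype_logits(features, prototypes):
    """Score each class by its best-matching spatial location.

    features:   (H*W, D) L2-normalised patch embeddings of one image.
    prototypes: (C, D)   L2-normalised class prototypes.
    Returns (C,) logits: for each class, the maximum cosine similarity
    over all locations. The arg-max location is what makes the decision
    interpretable: it points at the image region driving the prediction.
    """
    sim = features @ prototypes.T  # (H*W, C) cosine similarities
    return sim.max(axis=0)         # per-class peak activation

feats = np.array([[1.0, 0.0],      # location 1
                  [0.0, 1.0]])     # location 2
protos = np.array([[1.0, 0.0],     # disease A prototype
                   [0.6, 0.8]])    # disease B prototype
print(prototype_logits(feats, protos))
```

In the multi-label setting each logit would pass through an independent sigmoid, so concurrent diseases can all fire, which is exactly where prototype entanglement becomes a problem.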
TransMatch: Employing Bridging Strategy to Overcome Large Deformation for Feature Matching in Gastroscopy Scenario
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3541433
Guosong Zhu; Zhen Qin; Linfang Yu; Yi Ding; Zhiguang Qin
Abstract: Feature matching is widely applied in the image processing field. However, both traditional feature matching methods and previous deep learning-based methods struggle to accurately match features under severe deformations and large displacements, particularly in gastroscopy scenarios. To fill this gap, an effective feature matching framework named TransMatch is proposed, which addresses the large-displacement issue by matching features with global information leveraged via a Transformer structure. To address severe feature deformation, an effective bridging strategy with a novel bidirectional quadratic interpolation network is employed. This bridging strategy decomposes and simplifies the matching of features undergoing severe deformations. A deblurring module is specifically designed to address the potential blurriness of gastroscopy images. Experiments illustrate that the proposed method achieves state-of-the-art feature matching and frame interpolation performance in gastroscopy scenarios. Moreover, a large-scale gastroscopy dataset is constructed for multiple tasks.
Vol. 44, No. 6, pp. 2643-2656. Citations: 0
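The abstract does not specify the bidirectional interpolation network, but the underlying bridging idea, replacing one large displacement with two smaller ones through an intermediate frame, can be sketched generically (1-D displacement fields, nearest-neighbour sampling, all names hypothetical):

```python
import numpy as np

def compose_flows(f1, f2):
    """Compose two 1-D displacement fields: out(x) = f1(x) + f2(x + f1(x)).

    Matching frame A to frame B through a synthesised midpoint frame M
    splits one hard correspondence problem into two easier ones: f1 maps
    A to M, f2 maps M to B. Nearest-neighbour sampling keeps this short.
    """
    x = np.arange(len(f1))
    mid = np.clip(np.rint(x + f1).astype(int), 0, len(f2) - 1)
    return f1 + f2[mid]

# A point at x=0 moves +3 to the midpoint frame and +4 more to the target:
f1 = np.array([3.0, 0.0, 0.0, 0.0, 0.0])
f2 = np.array([0.0, 0.0, 0.0, 4.0, 0.0])
print(compose_flows(f1, f2))  # first entry combines both steps: 7.0
```

Each half-step stays within the displacement range that a matcher handles reliably, which is the point of the bridging decomposition.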
Method for Correcting the Muscle Fiber Orientation Determined by a T-Shaped Transducer in Ultrasound Shear Wave Elastography
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3541321
Chien Chen; Guo-Xuan Xu; Wei-Ren Su; Chih-Chung Huang
Abstract: Shear wave elastography (SWE) is a quantitative imaging method that can be used for clinical assessment of musculoskeletal stiffness, particularly in disease diagnosis and rehabilitation evaluation. However, the elastic anisotropy of skeletal muscle leads to uncertainties in shear wave velocity (SWV) measurements in SWE because the SWV varies with muscle fiber orientation. Many studies have therefore conducted 360° rotational measurements of SWV to determine the elastic anisotropy of muscle; however, the extended data acquisition time of this approach limits its clinical utility. In this study, a T-shaped transducer was used to rapidly measure the longitudinal and transverse SWVs (SWV_L and SWV_T) of muscle through an ellipse fitting method, estimating the fiber orientation angle when the excitation is normal to the material axis. The performance of this approach was examined in a homogeneous elastic phantom experiment, which indicated that the proposed T-shaped transducer generated shear waves in three directions by applying a supersonic push at the junction of the transducer. The error between the measured SWVs and the ground truth was approximately 6.5%. The proposed T-shaped transducer was also used to measure the SWV in the biceps brachii of four healthy individuals. The SWV_L and SWV_T values measured with this transducer were 2.47 and 1.09 m/s, respectively, consistent with the SWVs obtained under 360° rotation and with the literature (an error of ~4%). All experimental results were consistent with those obtained under 360° rotation, indicating that the proposed method enables rapid and stable estimation of muscle fiber orientation in SWE.
Vol. 44, No. 6, pp. 2528-2540. Citations: 0
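The ellipse fitting step can be made concrete. Assuming the angular SWV samples trace a centred ellipse whose semi-axes are SWV_L and SWV_T (a standard conic least-squares fit, not necessarily the authors' exact implementation), the fiber angle falls out of the eigen-decomposition of the conic:

```python
import numpy as np

def fit_swv_ellipse(theta, v):
    """Recover (SWV_L, SWV_T, fiber angle) from angular SWV samples.

    Points (v cos(theta), v sin(theta)) are assumed to lie on a centred
    ellipse a*x^2 + b*x*y + c*y^2 = 1, fitted by linear least squares;
    the eigen-decomposition of [[a, b/2], [b/2, c]] gives the semi-axes
    (axis = 1/sqrt(eigenvalue)) and the long-axis (fiber) direction.
    """
    x, y = v * np.cos(theta), v * np.sin(theta)
    A = np.column_stack([x * x, x * y, y * y])
    a, b, c = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    w, V = np.linalg.eigh(np.array([[a, b / 2], [b / 2, c]]))
    axes = 1.0 / np.sqrt(w)  # eigenvalues ascend, so axes[0] is the major axis
    angle = np.arctan2(V[1, 0], V[0, 0]) % np.pi  # major-axis direction, [0, pi)
    return axes[0], axes[1], angle

# Synthetic biceps-like data: SWV_L = 2.47 m/s, SWV_T = 1.09 m/s, fibers at 30 deg
t0 = np.deg2rad(30)
phi = np.linspace(0, 2 * np.pi, 36, endpoint=False)
xs = 2.47 * np.cos(phi) * np.cos(t0) - 1.09 * np.sin(phi) * np.sin(t0)
ys = 2.47 * np.cos(phi) * np.sin(t0) + 1.09 * np.sin(phi) * np.cos(t0)
vL, vT, ang = fit_swv_ellipse(np.arctan2(ys, xs), np.hypot(xs, ys))
print(vL, vT, np.rad2deg(ang))
```

With noise-free samples the fit recovers the two velocities and the 30° fiber angle exactly, which is why only a few propagation directions (rather than a full 360° sweep) suffice.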
CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction Across Various Sampling Rates
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3541491
Liutao Yang; Jiahao Huang; Guang Yang; Daoqiang Zhang
Abstract: Sparse-view X-ray computed tomography has emerged as a contemporary technique to mitigate radiation dose. Because of the reduced number of projection views, traditional reconstruction methods can produce severe artifacts. Recently, deep learning methods have made promising progress in removing artifacts in Sparse-View Computed Tomography (SVCT). However, given the limited generalization capability of deep learning models, current methods usually train models at fixed sampling rates, which affects the usability and flexibility of model deployment in real clinical settings. To address this issue, our study proposes an adaptive reconstruction method that achieves high-performance SVCT reconstruction at various sampling rates. Specifically, we design a novel imaging degradation operator in the proposed sampling diffusion model for SVCT (CT-SDM) to simulate the projection process in the sinogram domain. The CT-SDM can thus gradually add projection views to highly undersampled measurements to generate full-view sinograms. By choosing an appropriate starting point in diffusion inference, the proposed model can recover full-view sinograms from various sampling rates with only one trained model. Experiments on several datasets have verified the effectiveness and robustness of our approach, demonstrating its superiority in reconstructing high-quality images from sparse-view CT scans across various sampling rates.
Vol. 44, No. 6, pp. 2581-2593. Citations: 0
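A sketch of what a sinogram-domain degradation operator could look like (uniform view selection and all names are assumptions; the paper's operator is embedded in the diffusion model itself):

```python
import numpy as np

def undersample_sinogram(sinogram, rate):
    """Keep a uniformly spaced fraction `rate` of the projection views.

    sinogram: (num_views, num_detectors) full-view sinogram.
    Returns (mask, sparse) where sparse zeroes out the dropped views --
    a stand-in for the degradation the diffusion model inverts by
    progressively adding views back until the sinogram is full-view.
    """
    n = sinogram.shape[0]
    keep = np.round(np.linspace(0, n - 1, int(round(n * rate)))).astype(int)
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    return mask, sinogram * mask[:, None]

full = np.random.rand(180, 64)   # 180 views, 64 detector bins
mask, sparse = undersample_sinogram(full, rate=0.25)
print(mask.sum())                # 45 of 180 views retained
```

Varying `rate` at inference time corresponds to choosing a different starting point on the degradation schedule, which is how a single trained model can serve many sampling rates.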
Masked Vascular Structure Segmentation and Completion in Retinal Images
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3538336
Yi Zhou; Thiara Sana Ahmed; Meng Wang; Eric A. Newman; Leopold Schmetterer; Huazhu Fu; Jun Cheng; Bingyao Tan
Abstract: Early retinal vascular changes in diseases such as diabetic retinopathy often occur at a microscopic level. Accurate evaluation of retinal vascular networks at a micro level could significantly improve our understanding of angiopathology and potentially aid ophthalmologists in disease assessment and management. Multiple angiogram-related retinal imaging modalities, including fundus, optical coherence tomography angiography, and fluorescence angiography, project continuous, inter-connected retinal microvascular networks into imaging domains. However, extracting the microvascular network, which includes arterioles, venules, and capillaries, is challenging due to the limited contrast and resolution. As a result, the vascular network often appears as fragmented segments. In this paper, we propose a backbone-agnostic Masked Vascular Structure Segmentation and Completion (MaskVSC) method to reconstruct the retinal vascular network. MaskVSC simulates missing sections of blood vessels and uses this simulation to train the model to predict the missing parts and their connections. This approach simulates highly heterogeneous forms of vessel breaks and mitigates the need for massive data labeling. Accordingly, we introduce a connectivity loss function that penalizes interruptions in the vascular network. Our findings show that masking 40% of the segments yields optimal performance in reconstructing the interconnected vascular network. We test our method on three different types of retinal images across five separate datasets. The results demonstrate that MaskVSC outperforms state-of-the-art methods in maintaining vascular network completeness and segmentation accuracy. Furthermore, MaskVSC has been applied to different segmentation backbones and consistently improves their performance. The code and 2PFM data are available at https://github.com/Zhouyi-Zura/MaskVSC.
Vol. 44, No. 6, pp. 2492-2503. Citations: 0
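The segment-masking idea can be sketched as follows (the segment representation and names are hypothetical; the 40% ratio is the paper's reported optimum):

```python
import numpy as np

def mask_segments(segments, mask_ratio=0.4, rng=None):
    """Randomly hide a fraction of vessel segments for self-supervision.

    segments: list of arrays of point indices, one per centreline segment.
    Returns (visible, hidden): the model sees `visible` and is trained to
    predict the points in `hidden`, mimicking the fragmented vessel maps
    that low-contrast imaging produces -- no extra labels needed.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n_hide = int(round(len(segments) * mask_ratio))
    hide = set(rng.choice(len(segments), size=n_hide, replace=False).tolist())
    visible = [s for i, s in enumerate(segments) if i not in hide]
    hidden = [s for i, s in enumerate(segments) if i in hide]
    return visible, hidden

segs = [np.arange(i * 10, i * 10 + 10) for i in range(10)]
vis, hid = mask_segments(segs, mask_ratio=0.4)
print(len(vis), len(hid))  # 6 4
```

A connectivity loss would then penalize predictions in which the reconnected segments still leave gaps, pushing the network toward a single interconnected tree.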
GVM-Net: A GNN-Based Vessel Matching Network for 2D/3D Non-Rigid Coronary Artery Registration
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3540906
Yankai Chen; Guanyu Li; Chunming Li; Wei Yu; Zehao Fan; Jingfeng Bai; Shengxian Tu
Abstract: The registration of coronary artery structures from preoperative coronary computed tomography angiography to intraoperative coronary angiography is of great interest for improving guidance in percutaneous coronary interventions. However, non-rigid deformation and discrepancies in both dimensions and topology between the two imaging modalities make 2D/3D coronary artery registration challenging. In this study, we address this problem by formulating it as a centerline feature matching task and propose a GNN-based vessel matching network (GVM-Net) to establish dense correspondence between the two image modalities in an end-to-end manner. GVM-Net treats centerline points as graph nodes and effectively models the complex topological relationships between them through attention mechanisms and message passing. Furthermore, by incorporating redundant rows and columns into the matching matrix, GVM-Net can effectively handle inconsistencies in vascular structures. We also introduce a query-based node grouping module, which clusters nodes in the feature space to further explore the topological relationships. GVM-Net achieves an average F1-score of 89.74% with a mean pixel distance of 0.48 pixels on a synthetic dataset of 276 data pairs, and an average F1-score of 83.35% with a mean error of 1.52 mm on 55 manually labeled clinical cases, both exceeding existing feature matching methods.
Vol. 44, No. 6, pp. 2617-2630. Citations: 0
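Using redundant rows and columns to absorb unmatchable nodes resembles the "dustbin" trick from learned feature matching. A minimal argmax-based sketch (the real GVM-Net learns the scores and uses a differentiable assignment; this toy version is an assumption):

```python
import numpy as np

def match_with_dustbin(scores, dustbin=0.0):
    """Augment a similarity matrix with a dustbin row/column.

    scores: (M, N) node-to-node similarities between two vessel graphs.
    A node whose best option is the dustbin is declared unmatched, which
    absorbs the topology mismatch between the two vascular structures.
    Returns, per row node, the matched column index or -1 (unmatched).
    """
    m, n = scores.shape
    aug = np.full((m + 1, n + 1), dustbin)
    aug[:m, :n] = scores
    best = aug[:m].argmax(axis=1)  # each row picks a column or the dustbin
    return np.where(best == n, -1, best)

scores = np.array([[0.9, -0.2],
                   [-0.5, -0.1],
                   [0.1, 0.8]])
print(match_with_dustbin(scores))  # node 1 has no counterpart and maps to -1
```

Without the extra row/column, every centerline point would be forced into some correspondence, even across vessels that exist in only one modality.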
DSCA: A Digital Subtraction Angiography Sequence Dataset and Spatio-Temporal Model for Cerebral Artery Segmentation
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-13. DOI: 10.1109/TMI.2025.3540886
Jiong Zhang; Qihang Xie; Lei Mou; Dan Zhang; Da Chen; Caifeng Shan; Yitian Zhao; Ruisheng Su; Mengguo Guo
Abstract: Cerebrovascular diseases (CVDs) remain a leading cause of global disability and mortality. Digital Subtraction Angiography (DSA) sequences, recognized as the gold standard for diagnosing CVDs, can clearly visualize dynamic flow and reveal pathological conditions within the cerebrovasculature. Precise segmentation of cerebral arteries (CAs) and classification of their main trunks and branches are therefore crucial for physicians to accurately quantify disease. However, accurate CA segmentation in DSA sequences remains challenging due to small vessels with low contrast and ambiguity between vessels and residual skull structures. Moreover, the lack of publicly available datasets limits exploration in the field. In this paper, we introduce the DSA Sequence-based Cerebral Artery segmentation dataset (DSCA), a publicly accessible dataset designed specifically for pixel-level semantic segmentation of CAs. Additionally, we propose DSANet, a spatio-temporal network for CA segmentation in DSA sequences. Unlike existing DSA segmentation methods that focus on a single frame, DSANet introduces a separate temporal encoding branch to capture dynamic vessel details across multiple frames. To enhance small vessel segmentation and improve vessel connectivity, we design a novel TemporalFormer module to capture global context and correlations among sequential frames. Furthermore, we develop a Spatio-Temporal Fusion (STF) module to effectively integrate spatial and temporal features from the encoder. Extensive experiments demonstrate that DSANet outperforms other state-of-the-art methods in CA segmentation, achieving a Dice of 0.9033.
Vol. 44, No. 6, pp. 2515-2527. Citations: 0
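The Dice score quoted above is the standard overlap metric for segmentation; for reference, a minimal implementation:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 0.9033 thus means that, on average, over 90% of the predicted and ground-truth artery pixels coincide, which is demanding for thin, low-contrast vessels.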