Computerized Medical Imaging and Graphics: Latest Articles

Multi-modal pre-post treatment consistency learning for automatic segmentation and evaluation of the Circle of Willis
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-08 · DOI: 10.1016/j.compmedimag.2025.102521
Zehang Lin, Yusheng Liu, Jiahua Wu, Da-Han Wang, Xu-Yao Zhang, Shunzhi Zhu
Abstract: The Circle of Willis (CoW) is a crucial vascular structure in the brain, vital for diagnosing vascular diseases. During the acute phase of disease, CT angiography (CTA) is commonly used to quickly locate occlusions within the CoW. After treatment, MR angiography (MRA) is preferred for visualizing postoperative vascular structures, reducing radiation exposure. Clinically, the pre- and post-treatment (P&P-T) changes in the CoW are critical for assessing treatment efficacy. However, previous studies focused on single-modality segmentation, leading to cumulative errors when segmenting the CoW in CTA and MRA separately; it is then difficult to tell whether observed changes in the CoW stem from segmentation errors or actual therapeutic effects. To address these challenges, we propose a comprehensive framework integrating the Cross-Modal Semantic Consistency Network (CMSC-Net) for segmentation and the Semantic Consistency Evaluation Network (SC-ENet) for treatment evaluation. Specifically, CMSC-Net includes two key components: the Modality Pair Alignment Module (MPAM), which generates spatially aligned modality pairs (CTA-MRA, MRA-CTA) to mitigate imaging discrepancies, and the Cross-Modal Attention Module (CMAM), which enhances CTA segmentation by leveraging high-resolution MRA features. Additionally, a novel loss function ensures semantic consistency across modalities, supporting stable network convergence. Meanwhile, SC-ENet automates treatment-efficacy evaluation by extracting static vascular features and dynamically tracking morphological changes over time. Experimental results show that CMSC-Net achieves consistent CoW segmentation across modalities, with SC-ENet delivering high-precision treatment evaluation. (Volume 122, Article 102521)
Citations: 0
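The abstract mentions a loss enforcing semantic consistency across modalities but does not specify its form. One common way to realize such a constraint, shown here purely as an illustrative assumption rather than the authors' actual loss, is one minus the cosine similarity of paired CTA/MRA feature vectors:

```python
import numpy as np

def consistency_loss(feat_cta, feat_mra, eps=1e-8):
    """Illustrative cross-modal consistency term: 1 - cosine similarity
    between paired CTA and MRA feature vectors (0 when perfectly aligned)."""
    num = np.dot(feat_cta, feat_mra)
    denom = np.linalg.norm(feat_cta) * np.linalg.norm(feat_mra) + eps
    return 1.0 - num / denom

f_cta = np.array([1.0, 0.0, 1.0])
f_mra = np.array([1.0, 0.0, 1.0])
print(consistency_loss(f_cta, f_mra))  # ~0 for identical features
```

Minimizing such a term pulls the two modality encoders toward a shared semantic embedding, which is the general idea behind consistency objectives of this kind.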
UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-07 · DOI: 10.1016/j.compmedimag.2025.102516
Jiayu Lei, Lisong Dai, Haoyun Jiang, Chaoyi Wu, Xiaoman Zhang, Yao Zhang, Jiangchao Yao, Weidi Xie, Yanyong Zhang, Yuehua Li, Ya Zhang, Yanfeng Wang
Abstract: Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges for model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels and thereby significantly improving feature-learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning on downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and performing on par with radiologists in specific disease categories. It shows strong generalization across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain. (Volume 122, Article 102516)
Citations: 0
Domain adaptive detection framework for multi-center bone tumor detection on radiographs
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-07 · DOI: 10.1016/j.compmedimag.2025.102522
Bing Li, Danyang Xu, Hongxin Lin, Ruodai Wu, Songxiong Wu, Jingjing Shao, Jinxiang Zhang, Haiyang Dai, Dan Wei, Bingsheng Huang, Zhenhua Gao, Xianfen Diao
Abstract: Automatic bone tumor detection on radiographs is crucial for reducing mortality from bone cancer. However, the performance of detection methods may be considerably affected when deployed on bone tumor data from a distinct domain, owing to differences in the imaging process. This could be addressed by training with a large amount of annotated data, but such data are difficult to obtain in clinical practice. To address this challenge, we propose a domain-adaptive (DA) detection framework to effectively bridge the domain gap of bone tumor radiographs across centers, consisting of four parts: a multilevel feature alignment module (MFAM) for image-level alignment, a Wasserstein distance critic (WDC) for quantifying feature distance, an instance feature alignment module (IFAM) for instance-level alignment, and a consistency regularization module (CRM) that maintains consistency between the domain predictions of MFAM and IFAM. Experimental results indicated that our framework improves average precision at an intersection-over-union threshold of 0.2 (AP@20) on the source and target domain test sets by 1% and 8.9%, respectively. Moreover, we designed a domain discriminator with an attention mechanism to improve the efficiency and performance of the domain-adaptive bone tumor detection model, which further improved AP@20 on the source and target domain test sets by 2% and 10.7%, respectively. The proposed DA model is expected to bridge the domain gap and address the generalization problem across multiple centers. (Volume 123, Article 102522)
Citations: 0
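The paper reports average precision at an intersection-over-union (IoU) threshold of 0.2 (AP@20). As a reminder of what that threshold means, here is a minimal IoU computation for axis-aligned boxes (a generic sketch, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under AP@20, a detection counts as a true positive when IoU >= 0.2.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.143, below 0.2
```

A 0.2 threshold is deliberately loose: for tumor detection, a rough localization that overlaps the lesion is already clinically useful.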
Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-06 · DOI: 10.1016/j.compmedimag.2025.102520
Shudong Tan, Jiahui He, Ming Cui, Yuhua Gao, Deyu Sun, Yaoqin Xie, Jing Cai, Nazar Zaki, Wenjian Qin
Abstract: Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the prominent differences in gray values caused by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we propose a novel interstitial-guided segmentation network, the advance reverse guided network (ARGNet), for cervical tumor segmentation in interstitial brachytherapy. First, the location information of the interstitial needles is integrated into the deep learning framework via multi-task learning, using a cross-stitch mechanism to share encoder feature learning. Second, a spatial reverse attention mechanism is introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module is embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's capability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive experiments were conducted retrospectively on 191 CT scans acquired under multi-course interstitial brachytherapy. The results demonstrate that the characteristics of the interstitial needles help enhance segmentation, achieving state-of-the-art performance, which is anticipated to benefit radiotherapy planning. (Volume 123, Article 102520)
Citations: 0
A unified approach to medical image segmentation by leveraging mixed supervision and self and transfer learning (MIST)
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-05 · DOI: 10.1016/j.compmedimag.2025.102517
Jianfei Liu, Sayantan Bhadra, Omid Shafaat, Pritam Mukherjee, Christopher Parnell, Ronald M. Summers
Abstract: Medical image segmentation is important for quantitative disease diagnosis and treatment but relies on accurate pixel-wise labels, which are costly, time-consuming, and require domain expertise. This work introduces MIST (MIxed supervision, Self, and Transfer learning) to reduce manual labeling in medical image segmentation. A small set of cases was manually annotated ("strong labels"), while the rest used automated, less accurate labels ("weak labels"). Both label types trained a dual-branch network with a shared encoder and two decoders. Self-training iteratively refined the weak labels, and transfer learning reduced computational costs by freezing the encoder and fine-tuning the decoders. Applied to segmenting muscle, subcutaneous, and visceral adipose tissue, MIST used only 100 manually labeled slices from 20 CT scans to generate accurate labels for all slices of 102 internal scans, which were then used to train a 3D nnU-Net model. Using MIST to update weak labels significantly improved nnU-Net segmentation accuracy compared to training directly on strong and weak labels. The Dice similarity coefficient (DSC) increased for muscle (89.2 ± 4.3% to 93.2 ± 2.1%), subcutaneous (75.1 ± 14.4% to 94.2 ± 2.8%), and visceral adipose tissue (66.6 ± 16.4% to 77.1 ± 19.0%) on an internal dataset (p < .05). DSC also improved for muscle (80.5 ± 6.9% to 86.6 ± 3.9%) and subcutaneous adipose tissue (61.8 ± 12.5% to 82.7 ± 11.1%) on an external dataset (p < .05). MIST reduced the annotation burden by 99%, enabling efficient, accurate pixel-wise labeling for medical image segmentation. Code is available at https://github.com/rsummers11/NIH_CADLab_Body_Composition. (Volume 122, Article 102517)
Citations: 0
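The reported gains are measured with the Dice similarity coefficient (DSC), a standard overlap metric for segmentation masks. A minimal NumPy version of the standard formula, independent of the authors' code:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ target| / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])   # predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # reference mask
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why values like 94.2% indicate near-perfect agreement.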
Encoding 3D information in 2D feature maps for brain CT-Angiography
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-05 · DOI: 10.1016/j.compmedimag.2025.102518
Uma M. Lal-Trehan Estrada, Sunil Sheth, Arnau Oliver, Xavier Lladó, Luca Giancardo
Abstract: We propose learnable 3D pooling (L3P), a CNN module designed to compress 3D information into 2D feature maps using anisotropic convolutions and unidirectional max pooling. Specifically, we used L3P followed by a 2D network to generate predictions from 3D brain CT angiography (CTA) in the context of large vessel occlusion (LVO); to further demonstrate its versatility, we extended its application to 3D brain MRI analysis for brain age prediction. First, we designed an experiment to classify the LVO-affected hemisphere (left or right), projecting the input CTA onto the sagittal plane so that the discriminative location information lay along the 3D-to-2D compression axis, which allowed us to assess L3P's ability to encode 3D location. Second, we evaluated L3P on LVO detection as a binary classification task (presence or absence). We compared the performance of L3P models to that of 2D and stroke-specific 3D models. L3P models achieved results equivalent to stroke-specific 3D models while requiring fewer parameters and resources, and outperformed 2D models that used maximum-intensity-projection images as input. The generalizability of the L3P approach was evaluated on LVO-affected hemisphere detection using data from a single site for training/validation and data from 36 other sites for testing, achieving an AUC of 0.83 on the test set. L3P also performed comparably to or better than a fully 3D network on a brain age prediction task with a separate T1 MRI dataset, demonstrating its versatility across tasks and imaging modalities. Additionally, L3P models generated more interpretable feature maps. (Volume 122, Article 102518)
Citations: 0
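The core idea of L3P is to collapse one spatial axis of the volume so a 2D network can take over. A minimal NumPy sketch of the unidirectional max-pooling step alone (the learnable anisotropic convolutions that precede it in L3P are omitted here; array shapes are illustrative):

```python
import numpy as np

def unidirectional_max_pool(volume, axis):
    """Collapse one spatial axis of a 3D volume by max pooling,
    producing a 2D map (akin to a maximum intensity projection)."""
    return volume.max(axis=axis)

vol = np.random.rand(64, 128, 128)              # hypothetical volume, (z, y, x)
sagittal_2d = unidirectional_max_pool(vol, axis=2)  # collapse left-right axis
print(sagittal_2d.shape)  # (64, 128)
```

Plain max pooling discards where along the collapsed axis the maximum occurred; L3P's learnable convolutions before the pooling are what let the module encode that lost 3D position into the surviving 2D feature maps.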
Calcium deblooming in coronary computed tomography angiography via semantic-oriented generative adversarial network
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-02-24 · DOI: 10.1016/j.compmedimag.2025.102515
Huiyu Zhao, Wangshu Zhu, Luyuan Jin, Yijia Xiong, Xiao Deng, Yuehua Li, Weiwen Zou
Abstract: The calcium blooming artifact produced by calcified plaque in coronary computed tomography angiography (CCTA) is a significant source of false-positive readings for radiologists. Most previous research focused on general noise reduction in CT images, with limited performance on the blooming artifact. To address this problem, we designed an automated and robust semantics-oriented adversarial network that fully exploits the calcified plaques as semantic regions in CCTA. Semantic features are extracted by a feature extraction module and exploited through a global-local fusion module, a generator with a semantic similarity module, and a matrix discriminator. The effectiveness of our network was validated on both a virtual and a clinical dataset. The clinical dataset consists of 372 CCTA studies with corresponding coronary angiogram (CAG) results, with two cardiac radiologists (10 and 21 years of experience) assisting in the clinical evaluation. The proposed method effectively reduces artifacts in the three major coronary arteries and significantly improves the specificity and positive predictive value for the diagnosis of coronary stenosis. (Volume 122, Article 102515)
Citations: 0
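The evaluation hinges on specificity and positive predictive value (PPV), the two metrics that suffer most from blooming-induced false positives. Their standard definitions, with hypothetical counts chosen only to illustrate the effect of removing false positives:

```python
def specificity_ppv(tp, fp, tn, fn):
    """Specificity = TN / (TN + FP); PPV (precision) = TP / (TP + FP)."""
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return specificity, ppv

# Reducing blooming false positives raises both metrics (illustrative numbers):
print(specificity_ppv(tp=40, fp=20, tn=30, fn=10))  # (0.6, ≈0.667)
print(specificity_ppv(tp=40, fp=5, tn=45, fn=10))   # (0.9, ≈0.889)
```

Because blooming makes calcified plaque look larger than it is, readers over-call stenosis; deblooming converts those false positives back into true negatives, which is exactly what these two metrics capture.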
Intraoperative stenosis detection in X-ray coronary angiography via temporal fusion and attention-based CNN
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-02-23 · DOI: 10.1016/j.compmedimag.2025.102513
Meidi Chen, Siyin Wang, Ke Liang, Xiao Chen, Zihan Xu, Chen Zhao, Weimin Yuan, Jing Wan, Qiu Huang
Abstract:
Background and objective: Coronary artery disease (CAD), the leading cause of mortality, is caused by atherosclerotic plaque buildup in the arteries. The gold standard for diagnosing CAD is X-ray coronary angiography (XCA) during percutaneous coronary intervention, where locating coronary artery stenosis is fundamental and essential. However, due to complex vascular features and motion artifacts caused by heartbeat and respiratory movement, manually recognizing stenosis is challenging for physicians, which may prolong surgical decision-making and lead to irreversible myocardial damage. We therefore aim to provide an automatic method for accurate stenosis localization.
Methods: We present a convolutional neural network (CNN) with feature-level temporal fusion and attention modules to detect coronary artery stenosis in XCA images. The temporal fusion module, composed of a deformable convolution and a correlation-based module, integrates time-varying vessel features from consecutive frames. The attention module adopts channel-wise recalibration to capture global context and spatial-wise recalibration to enhance stenosis features with local width and morphology information.
Results: We compare our method to commonly used attention methods, state-of-the-art object detection methods, and stenosis detection methods. Experimental results show that our fusion and attention strategy significantly improves performance in discerning stenosis (P < 0.05), achieving the best average recall score on two different datasets.
Conclusions: This is the first study to integrate both temporal fusion and an attention mechanism into a feature-level hybrid CNN framework for stenosis detection in XCA images; the approach proves effective in improving detection performance and is therefore potentially helpful for intraoperative stenosis localization. (Volume 122, Article 102513)
Citations: 0
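Channel-wise recalibration of the kind the Methods section describes is typically implemented squeeze-and-excitation style: global-average-pool each channel, pass the result through a small bottleneck, and gate the channels with sigmoid weights. A NumPy sketch under that assumption (weight shapes are hypothetical, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibration(feats, w1, w2):
    """SE-style channel recalibration: global average pool per channel,
    a small two-layer bottleneck, then per-channel sigmoid gates."""
    c = feats.shape[0]
    squeeze = feats.reshape(c, -1).mean(axis=1)          # (C,) global context
    gates = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) values in (0, 1)
    return feats * gates[:, None, None]                  # rescale each channel

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))   # (C, H, W) feature map
w1 = rng.standard_normal((4, 8)) * 0.1     # bottleneck weights (hypothetical)
w2 = rng.standard_normal((8, 4)) * 0.1
out = channel_recalibration(feats, w1, w2)
print(out.shape)  # (8, 16, 16)
```

The spatial-wise recalibration mentioned alongside it is the complementary operation: a per-pixel gate computed across channels rather than a per-channel gate computed across pixels.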
TTGA U-Net: Two-stage two-stream graph attention U-Net for hepatic vessel connectivity enhancement
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-02-22 · DOI: 10.1016/j.compmedimag.2025.102514
Ziqi Zhao, Wentao Li, Xiaoyi Ding, Jianqi Sun, Lisa X. Xu
Abstract: Accurate segmentation of hepatic vessels is pivotal for guiding preoperative planning of ablation surgery using CT images. Because non-contrast CT images often lack observable vessels, we focus on segmenting hepatic vessels in preoperative MR images. However, the vascular structures depicted in MR images are susceptible to noise, leading to connectivity challenges. To address this issue, we propose a two-stage two-stream graph attention U-Net (TTGA U-Net) for hepatic vessel segmentation. The first-stage network employs a CNN- or Transformer-based architecture to preliminarily locate the vessel positions, followed by an improved superpixel segmentation method that generates graph structures from the localization results. The second-stage network extracts graph node features through two parallel branches, a graph spatial attention network (GAT) and a graph channel attention network (GCT), employing self-attention mechanisms to balance these features; a graph pooling operation aggregates node information. Moreover, we introduce a feature fusion module in place of skip connections to merge the two graph attention features, effectively providing additional information to the decoder. We establish a novel, well-annotated, high-quality MR image dataset for hepatic vessel segmentation and validate the effectiveness of the connectivity enhancement network on this dataset and on the public 3D-IRCADb dataset. Experimental results demonstrate that TTGA U-Net outperforms state-of-the-art methods, notably enhancing vessel connectivity. (Volume 122, Article 102514)
Citations: 0
A novel generative model for brain tumor detection using magnetic resonance imaging
IF 5.4 · CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-02-19 · DOI: 10.1016/j.compmedimag.2025.102498
José Jerovane da Costa Nascimento, Adriell Gomes Marques, Lucas do Nascimento Souza, Carlos Mauricio Jaborandy de Mattos Dourado Junior, Antonio Carlos da Silva Barros, Victor Hugo C. de Albuquerque, Luís Fabrício de Freitas Sousa
Abstract: Brain tumors kill thousands of people worldwide each year, and early identification through diagnosis is essential for monitoring and treating patients. The proposed study introduces a new method based on intelligent computational cells that can segment the tumor region with high precision. The method uses deep learning to detect brain tumors with the "You Only Look Once" (YOLOv8) framework, followed by a fine-tuning process at the end of the network in which the intelligent computational cells traverse the detected region and segment the edges of the brain tumor. In addition, the method uses a classification pipeline that combines a set of classifiers and feature extractors with grid search to find the best combination and parameters for the dataset. The method obtained accuracies above 98% for region detection, above 99% for brain tumor segmentation, and above 98% for binary classification of brain tumors, with a segmentation time of less than 1 s, surpassing the state of the art on the same database and demonstrating the method's effectiveness. The approach classifies different databases through data fusion to determine the presence of a tumor in MRI images as well as the patient's life span. The segmentation and classification steps are validated against the literature, comparing with works that used the same dataset. The method also provides a generative AI component for brain tumors that produces a pre-diagnosis from the input data via a Large Language Model (LLM), and can be used in systems that aid medical imaging diagnosis. As contributions, this study employs new detection models combined with innovative methods based on digital image processing to improve segmentation metrics, uses data fusion of two tumor datasets to enhance classification performance, and utilizes LLM models to refine the pre-diagnosis obtained after classification. The study thus proposes a computer-aided diagnosis (CAD) method built on AI with digital image processing (PDI), CNNs, and LLMs. (Volume 121, Article 102498)
Citations: 0