Latest Articles in Computerized Medical Imaging and Graphics

ULST: U-shaped LeWin Spectral Transformer for virtual staining of pathological sections
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-28 · DOI: 10.1016/j.compmedimag.2025.102534
Haoran Zhang, Mingzhong Pan, Chenglong Zhang, Chenyang Xu, Hongxing Qi, Dapeng Lei, Xiaopeng Ma
{"title":"ULST: U-shaped LeWin Spectral Transformer for virtual staining of pathological sections","authors":"Haoran Zhang ,&nbsp;Mingzhong Pan ,&nbsp;Chenglong Zhang ,&nbsp;Chenyang Xu ,&nbsp;Hongxing Qi ,&nbsp;Dapeng Lei ,&nbsp;Xiaopeng Ma","doi":"10.1016/j.compmedimag.2025.102534","DOIUrl":"10.1016/j.compmedimag.2025.102534","url":null,"abstract":"<div><div>At present, pathological section staining faces several challenges, including complex sample preparation and stringent infrastructure requirements. Virtual staining methods utilizing deep neural networks to automatically generate stained images are gaining recognition. However, most current virtual staining techniques rely on standard RGB microscopy, which lacks spatial spectral information. In contrast, hyperspectral imaging of pathological sections provides rich spatial spectral data while maintaining high resolution. To address this issue, the U-shaped Locally-enhanced Window (LeWin) Spectral Transformer (ULST) was developed to convert unstained hyperspectral microscopic images into RGB equivalents of hematoxylin and eosin (HE) stained samples. The LeWin Spectral Transformer (LST) block within ULST takes full advantage of the transformer’s attention extraction capabilities. It applies local self-attention in the spatial domain using non-overlapping windows to capture local context while significantly reducing computational complexity for high-resolution feature maps and preserving spatial features from hyperspectral images (HSI). Furthermore, the Spectral Transformer collects spectral features without losing spatial information. By integrating a multi-scale encoder-bottle-decoder structure in a U-shaped network configuration with sequential symmetric connections of LSTs, ULST performs virtual HE staining on microscopic images of unstained hyperspectral pathological sections. 
Qualitative and quantitative experiments show that ULST performs better than other advanced virtual staining methods in the virtual HE staining task.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102534"},"PeriodicalIF":5.4,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
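The non-overlapping-window attention that the LST block relies on can be illustrated with a small NumPy sketch. This is a toy with identity query/key/value projections and a made-up window size, not the authors' implementation; it only shows why windowing shrinks the attention cost:

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping (win*win, C) windows."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def local_self_attention(x, win=4):
    """Self-attention restricted to each window: cost O(H*W*win^2) per layer
    instead of O((H*W)^2) for global attention over the whole feature map."""
    windows = window_partition(x, win)          # (num_win, win*win, C)
    q = k = v = windows                          # identity projections for the sketch
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1]))
    return attn @ v                              # (num_win, win*win, C)

x = np.random.rand(8, 8, 16)
out = local_self_attention(x, win=4)
print(out.shape)  # (4, 16, 16): 4 windows of 16 tokens, 16 channels
```

Each window attends only to its own 16 tokens, which is what keeps the cost linear in the number of windows for high-resolution inputs.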
LA-ResUNet: Attention-based network for longitudinal liver tumor segmentation from CT images
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-27 · DOI: 10.1016/j.compmedimag.2025.102536
Ri Jin, Hu-Ying Tang, Qian Yang, Wei Chen
{"title":"LA-ResUNet: Attention-based network for longitudinal liver tumor segmentation from CT images","authors":"Ri Jin ,&nbsp;Hu-Ying Tang ,&nbsp;Qian Yang ,&nbsp;Wei Chen","doi":"10.1016/j.compmedimag.2025.102536","DOIUrl":"10.1016/j.compmedimag.2025.102536","url":null,"abstract":"<div><div>Longitudinal liver tumor segmentation plays a fundamental role in studying and monitoring the progression of associated diseases. The correlation and differences between longitudinal data can further improve segmentation performance, which are inevitably omitted in single-time-point segmentation. However, there is no research in this field due to the lack of relevant data. To this issue, we collect and annotate the first longitudinal liver tumor segmentation benchmark dataset. A novel strategy that utilizes images from one time point to facilitate the image segmentation from another time point of the same patient is presented. On this basis, we propose a longitudinal attention based residual U-shaped network. Within it, a channel &amp; spatial attention module quantifies both channel-wise and spatial-wise dependencies of each feature to refine feature representations. And a longitudinal co-segmentation module captures cross-temporal correlation to recalibrate the feature at one time point according to another one for enhanced segmentation. Longitudinal segmentation is achieved by plugging these two multi-scale modules into each layer of the backbone network. Extensive experiments on our CT liver tumor dataset and an MRI brain tumor dataset have validated the effectiveness of the established strategy and the longitudinal segmentation ability of our network. 
Ablation studies have verified the functions of the proposed modules and their respective components.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102536"},"PeriodicalIF":5.4,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
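As a rough illustration of the channel & spatial attention idea, here is a weight-free NumPy sketch that derives both attention maps from simple averages. The paper's module uses learned parameters; this only shows the two-stage re-weighting pattern:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Refine a (C, H, W) feature map with channel-wise, then spatial-wise
    attention computed from global average descriptors (weight-free sketch)."""
    # channel attention: one weight per channel from its global average
    ch = sigmoid(feat.mean(axis=(1, 2)))[:, None, None]   # (C, 1, 1)
    feat = feat * ch
    # spatial attention: one weight per location from the channel average
    sp = sigmoid(feat.mean(axis=0))[None, :, :]           # (1, H, W)
    return feat * sp

f = np.random.rand(8, 4, 4)
out = channel_spatial_attention(f)
print(out.shape)  # (8, 4, 4): same shape, re-weighted features
```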
Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-25 · DOI: 10.1016/j.compmedimag.2025.102535
Alessia De Biase, Nanna Maria Sijtsema, Lisanne V. van Dijk, Roel Steenbakkers, Johannes A. Langendijk, Peter van Ooijen
{"title":"Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort","authors":"Alessia De Biase ,&nbsp;Nanna Maria Sijtsema ,&nbsp;Lisanne V. van Dijk ,&nbsp;Roel Steenbakkers ,&nbsp;Johannes A. Langendijk ,&nbsp;Peter van Ooijen","doi":"10.1016/j.compmedimag.2025.102535","DOIUrl":"10.1016/j.compmedimag.2025.102535","url":null,"abstract":"<div><h3>Purpose</h3><div>Information on deep learning (DL) tumor segmentation accuracy on a voxel and a structure level is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images and voxel-level predicted probabilities (TPM) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in TPMs predicts segmentation model accuracy in an independent external cohort.</div></div><div><h3>Methods</h3><div>We retrospectively gathered PET/CT images and manual delineations of gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, model uncertainty by coefficient of variation (CV) of multiple predictions.</div></div><div><h3>Results</h3><div>Predicted GTVp and GTVln segmentations in the external test achieved 0.75 and 0.70 aggregate DSC. 
Patient-specific CV and surface DSC showed a significant correlation for both structures (-0.54 and −0.66 for GTVp and GTVln) in the external set, indicating significant calibration.</div></div><div><h3>Conclusion</h3><div>Significant accuracy versus uncertainty calibration was achieved for TPMs in both internal and external test sets, indicating the potential use of quantified uncertainty from TPMs to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102535"},"PeriodicalIF":5.4,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
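The study quantifies uncertainty as the coefficient of variation over multiple predictions. One plausible, simplified reading of that statistic is the CV of soft predicted volumes across ensemble members, sketched here in NumPy; the authors' exact per-patient definition may differ:

```python
import numpy as np

def ensemble_uncertainty(prob_maps):
    """Structure-level uncertainty from an ensemble of voxel-wise tumor
    probability maps: coefficient of variation (std/mean) of the soft
    predicted volume across ensemble members."""
    volumes = np.array([p.sum() for p in prob_maps])  # soft volume per member
    return volumes.std() / volumes.mean()

# three hypothetical ensemble members predicting on a 4x4x4 patch
rng = np.random.default_rng(0)
members = [rng.random((4, 4, 4)) for _ in range(3)]
cv = ensemble_uncertainty(members)
print(round(float(cv), 4))
```

A high CV flags patients where ensemble members disagree, which is the signal the paper correlates with lower surface DSC.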
A technology framework for distributed preoperative planning and medical training in deep brain stimulation
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-24 · DOI: 10.1016/j.compmedimag.2025.102533
Qi Zhang, Roy Eagleson, Sandrine de Ribaupierre
{"title":"A technology framework for distributed preoperative planning and medical training in deep brain stimulation","authors":"Qi Zhang ,&nbsp;Roy Eagleson ,&nbsp;Sandrine de Ribaupierre","doi":"10.1016/j.compmedimag.2025.102533","DOIUrl":"10.1016/j.compmedimag.2025.102533","url":null,"abstract":"<div><div>Deep brain stimulation (DBS) is a groundbreaking therapy for movement disorders, necessitating precise planning and extensive training to ensure accurate electrode placement in critical brain regions, such as the thalamic nuclei. This paper introduces an innovative technology framework for DBS to support distributed, real-time preoperative planning and medical training. The system integrates advanced imaging techniques, interactive graphical representation, and real-time data synchronization to assist clinicians in accurately identifying essential anatomical structures and refining pre-surgical plans. At the platform’s core are multi-volume rendering, segmentation, and virtual tool modeling algorithms that employ transparency and refinement controls to seamlessly merge and visualize different tissue types in 3D alongside their interactions with surgical tools. This method enhances visual clarity and provides a highly detailed depiction of crucial structures, ensuring the precision required for effective DBS planning. By delivering dynamic, real-time feedback, the framework supports improved decision-making and sets a new standard for collaborative DBS training and procedural preparation. The platform’s web-based synchronization architecture enhances collaboration by allowing neurologists and surgeons to simultaneously interact with visualized data from any location. This functionality supports live feedback, promotes collaborative decision-making, and streamlines procedural planning, leading to improved surgical outcomes. 
Performance evaluations across various hardware configurations and web browsers demonstrate the platform’s high rendering speed and low-latency data synchronization, ensuring responsive and reliable interactions essential for clinical use. Its adaptability makes it suitable for medical training, preoperative planning, and intraoperative support, accommodating a wide range of hardware setups and web environments to address the specific demands of DBS-related procedures. This research lays a robust foundation for advancing distributed clinical planning, comprehensive medical education, and improved patient care in neurostimulation therapies.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102533"},"PeriodicalIF":5.4,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143715047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
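Multi-volume rendering with per-tissue transparency controls ultimately reduces to alpha compositing along each viewing ray. A minimal front-to-back compositing sketch (illustrative only, not the platform's renderer):

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back alpha compositing along one ray: accumulate each sample's
    color weighted by its opacity and the remaining transmittance."""
    out = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:       # early ray termination once fully opaque
            break
    return out

# two samples along a ray: semi-transparent red tissue in front of opaque white bone
rgb = composite_ray([(1, 0, 0), (1, 1, 1)], [0.5, 1.0])
print(rgb)  # [1.  0.5 0.5]
```

Per-tissue transparency controls simply scale the `alphas` assigned to each segmented tissue type before compositing.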
TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-23 · DOI: 10.1016/j.compmedimag.2025.102527
Xin Yang, Dongxue Li, Liwei Deng, Sijuan Huang, Jing Wang
{"title":"TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration","authors":"Xin Yang ,&nbsp;Dongxue Li ,&nbsp;Liwei Deng ,&nbsp;Sijuan Huang ,&nbsp;Jing Wang","doi":"10.1016/j.compmedimag.2025.102527","DOIUrl":"10.1016/j.compmedimag.2025.102527","url":null,"abstract":"<div><div>Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly when dealing with medical images with intricate structures. To address these issues, this paper presents a novel registration network (TCDE-Net), an unsupervised medical image registration method based on a dual-encoder architecture. The dual encoders complement each other in feature extraction, enabling the model to effectively handle large-scale nonlinear deformations and capture intricate local details, thereby enhancing registration accuracy. Additionally, the detail-enhancement attention module aids in restoring fine-grained features, improving the network's capability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR dataset demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. 
Our code is available at <span><span>https://github.com/muzidongxue/TCDE-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102527"},"PeriodicalIF":5.4,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143697647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
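A registration network's output is typically applied through a spatial warp: output(x) = moving(x + u(x)), where u is the predicted dense displacement field. A nearest-neighbour NumPy sketch of that step (real implementations, including unsupervised networks like this one, use trilinear interpolation so gradients flow; this is not the TCDE-Net code):

```python
import numpy as np

def warp_image(moving, displacement):
    """Warp a 3D image by a dense displacement field u of shape (3, D, H, W):
    output(x) = moving(x + u(x)), with nearest-neighbour sampling and
    border clamping for simplicity."""
    D, H, W = moving.shape
    grid = np.mgrid[0:D, 0:H, 0:W].astype(float)   # identity sampling coordinates
    coords = np.rint(grid + displacement).astype(int)
    for axis, size in enumerate((D, H, W)):        # clamp samples to the volume
        coords[axis] = np.clip(coords[axis], 0, size - 1)
    return moving[coords[0], coords[1], coords[2]]

moving = np.zeros((8, 8, 8)); moving[2, 2, 2] = 1.0
u = np.zeros((3, 8, 8, 8)); u[0] += 1.0            # sample one voxel deeper along axis 0
warped = warp_image(moving, u)
print(warped[1, 2, 2])  # 1.0: the bright voxel now appears one slice earlier
```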
Class balancing diversity multimodal ensemble for Alzheimer’s disease diagnosis and early detection
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-22 · DOI: 10.1016/j.compmedimag.2025.102529
Arianna Francesconi, Lazzaro di Biase, Donato Cappetta, Fabio Rebecchi, Paolo Soda, Rosa Sicilia, Valerio Guarrasi, Alzheimer’s Disease Neuroimaging Initiative
{"title":"Class balancing diversity multimodal ensemble for Alzheimer’s disease diagnosis and early detection","authors":"Arianna Francesconi ,&nbsp;Lazzaro di Biase ,&nbsp;Donato Cappetta ,&nbsp;Fabio Rebecchi ,&nbsp;Paolo Soda ,&nbsp;Rosa Sicilia ,&nbsp;Valerio Guarrasi ,&nbsp;Alzheimer’s Disease Neuroimaging Initiative","doi":"10.1016/j.compmedimag.2025.102529","DOIUrl":"10.1016/j.compmedimag.2025.102529","url":null,"abstract":"<div><div>Alzheimer’s disease (AD) poses significant global health challenges due to its increasing prevalence and associated societal costs. Early detection and diagnosis of AD are critical for delaying progression and improving patient outcomes. Traditional diagnostic methods and single-modality data often fall short in identifying early-stage AD and distinguishing it from Mild Cognitive Impairment (MCI). This study addresses these challenges by introducing a novel approach: multImodal enseMble via class BALancing diversity for iMbalancEd Data (IMBALMED). IMBALMED integrates multimodal data from the Alzheimer’s Disease Neuroimaging Initiative database, including clinical assessments, neuroimaging phenotypes, biospecimen, and subject characteristics data. It employs a new ensemble of model classifiers, designed specifically for this framework, which combines eight distinct families of learning paradigms trained with diverse class balancing techniques to overcome class imbalance and enhance model accuracy. We evaluate IMBALMED on two diagnostic tasks (binary and ternary classification) and four binary early detection tasks (at 12, 24, 36, and 48 months), comparing its performance with state-of-the-art algorithms and an unbalanced dataset method. To further validate the proposed model and ensure genuine generalization to real-world scenarios, we conducted an external validation experiment using data from the most recent phase of the ADNI dataset. 
IMBALMED demonstrates superior diagnostic accuracy and predictive performance in both binary and ternary classification tasks, significantly improving early detection of MCI at a 48-month time point and showing excellent generalizability in the 12-month task during external validation. The method shows improved classification performance and robustness, offering a promising solution for early detection and management of AD.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102529"},"PeriodicalIF":5.4,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143697586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
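Two of the simplest class balancing techniques an ensemble like this can diversify over are random over- and under-sampling, sketched below in NumPy. This is illustrative only; the specific eight learning paradigms and balancing methods IMBALMED combines are described in the paper:

```python
import numpy as np

def random_oversample(X, y, rng):
    """Resample every class up to the size of the largest class (with replacement)."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

def random_undersample(X, y, rng):
    """Resample every class down to the size of the smallest class (without replacement)."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = np.array([0] * 90 + [1] * 10)          # 9:1 imbalance, as in diagnosis-vs-MCI cohorts
for balancer in (random_oversample, random_undersample):
    Xb, yb = balancer(X, y, rng)
    print(balancer.__name__, np.bincount(yb))
```

Training otherwise-identical classifiers on differently balanced resamples is one cheap way to inject the "class balancing diversity" the title refers to.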
SpineMamba: Enhancing 3D spinal segmentation in clinical imaging through residual visual Mamba layers and shape priors
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-22 · DOI: 10.1016/j.compmedimag.2025.102531
Zhiqing Zhang, Tianyong Liu, Guojia Fan, Na Li, Bin Li, Yao Pu, Qianjin Feng, Shoujun Zhou
{"title":"SpineMamba: Enhancing 3D spinal segmentation in clinical imaging through residual visual Mamba layers and shape priors","authors":"Zhiqing Zhang ,&nbsp;Tianyong Liu ,&nbsp;Guojia Fan ,&nbsp;Na Li ,&nbsp;Bin Li ,&nbsp;Yao Pu ,&nbsp;Qianjin Feng ,&nbsp;Shoujun Zhou","doi":"10.1016/j.compmedimag.2025.102531","DOIUrl":"10.1016/j.compmedimag.2025.102531","url":null,"abstract":"<div><div>Accurate segmentation of three-dimensional (3D) clinical medical images is critical for the diagnosis and treatment of spinal diseases. However, the complexity of spinal anatomy and the inherent uncertainties of current imaging technologies pose significant challenges for the semantic segmentation of spinal images. Although convolutional neural networks (CNNs) and Transformer-based models have achieved remarkable progress in spinal segmentation, their limitations in modeling long-range dependencies hinder further improvements in segmentation accuracy. To address these challenges, we propose a novel framework, SpineMamba, which incorporates a residual visual Mamba layer capable of effectively capturing and modeling the deep semantic features and long-range spatial dependencies in 3D spinal data. To further enhance the structural semantic understanding of the vertebrae, we also propose a novel spinal shape prior module that captures specific anatomical information about the spine from medical images, significantly enhancing the model’s ability to extract structural semantic information of the vertebrae. Extensive comparative and ablation experiments across three datasets demonstrate that SpineMamba outperforms existing state-of-the-art models. On two computed tomography (CT) datasets, the average Dice similarity coefficients achieved are 94.40±4% and 88.28±3%, respectively, while on a magnetic resonance (MR) dataset, the model achieves a Dice score of 86.95±10%. 
Notably, SpineMamba surpasses the widely recognized nnU-Net in segmentation accuracy, with a maximum improvement of 3.63 percentage points. These results highlight the precision, robustness, and exceptional generalization capability of SpineMamba.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102531"},"PeriodicalIF":5.4,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
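The Dice similarity coefficient these results are reported in is 2|A∩B| / (|A| + |B|) for prediction A and ground truth B. A small NumPy sketch of the metric:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[:2] = 1      # top two rows
b = np.zeros((4, 4), dtype=int); b[1:3] = 1     # middle two rows
print(round(dice(a, b), 6))  # overlap = 4 pixels, so 2*4 / (8 + 8) = 0.5
```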
Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-21 · DOI: 10.1016/j.compmedimag.2025.102532
Aghiles Kebaili, Jérôme Lapuyade-Lahorgue, Pierre Vera, Su Ruan
{"title":"Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation","authors":"Aghiles Kebaili ,&nbsp;Jérôme Lapuyade-Lahorgue ,&nbsp;Pierre Vera ,&nbsp;Su Ruan","doi":"10.1016/j.compmedimag.2025.102532","DOIUrl":"10.1016/j.compmedimag.2025.102532","url":null,"abstract":"<div><div>Multimodality is often necessary for improving object segmentation tasks, especially in the case of multilabel tasks, such as tumor segmentation, which is crucial for clinical diagnosis and treatment planning. However, a major challenge in utilizing multimodality with deep learning remains: the limited availability of annotated training data, primarily due to the time-consuming acquisition process and the necessity for expert annotations. Although deep learning has significantly advanced many tasks in medical imaging, conventional augmentation techniques are often insufficient due to the inherent complexity of volumetric medical data. To address this problem, we propose an innovative slice-based latent diffusion architecture for the generation of 3D multi-modal images and their corresponding multi-label masks. Our approach enables the simultaneous generation of the image and mask in a slice-by-slice fashion, leveraging a positional encoding and a Latent Aggregation module to maintain spatial coherence and capture slice sequentiality. This method effectively reduces the computational complexity and memory demands typically associated with diffusion models. Additionally, we condition our architecture on tumor characteristics to generate a diverse array of tumor variations and enhance texture using a refining module that acts like a super-resolution mechanism, mitigating the inherent blurriness caused by data scarcity in the autoencoder. 
We evaluate the effectiveness of our synthesized volumes using the BRATS2021 dataset to segment the tumor with three tissue labels and compare them with other state-of-the-art diffusion models through a downstream segmentation task, demonstrating the superior performance and efficiency of our method. While our primary application is tumor segmentation, this method can be readily adapted to other modalities. Code is available here : <span><span>https://github.com/Arksyd96/multi-modal-mri-and-mask-synthesis-with-conditional-slice-based-ldm</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102532"},"PeriodicalIF":5.4,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143687870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
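The positional encoding that tells a slice-wise generator where each 2D slice sits in the 3D volume could, for instance, be the standard sinusoidal encoding over slice indices, sketched below. This is an assumption for illustration; the paper's encoding may be learned or differ in form:

```python
import numpy as np

def slice_positional_encoding(num_slices, dim):
    """Sinusoidal positional encoding over slice indices: one dim-vector per
    slice, with sin/cos pairs at geometrically spaced frequencies."""
    pos = np.arange(num_slices)[:, None]          # (S, 1) slice indices
    i = np.arange(dim // 2)[None, :]              # (1, dim/2) frequency indices
    angles = pos / (10000.0 ** (2 * i / dim))
    pe = np.zeros((num_slices, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = slice_positional_encoding(num_slices=16, dim=8)
print(pe.shape)  # (16, 8): one 8-dim position vector per slice
```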
Deep Guess acceleration for explainable image reconstruction in sparse-view CT
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-21 · DOI: 10.1016/j.compmedimag.2025.102530
Elena Loli Piccolomini, Davide Evangelista, Elena Morotti
{"title":"Deep Guess acceleration for explainable image reconstruction in sparse-view CT","authors":"Elena Loli Piccolomini ,&nbsp;Davide Evangelista ,&nbsp;Elena Morotti","doi":"10.1016/j.compmedimag.2025.102530","DOIUrl":"10.1016/j.compmedimag.2025.102530","url":null,"abstract":"<div><div>Sparse-view Computed Tomography (CT) is an emerging protocol designed to reduce X-ray dose radiation in medical imaging. Reconstructions based on the traditional Filtered Back Projection algorithm suffer from severe artifacts due to sparse data. In contrast, Model-Based Iterative Reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted as the Deep Guess acceleration scheme, using a trained neural network both to quicken the regularized MBIR and to enhance the reconstruction accuracy. We integrate state-of-the-art deep learning tools to initialize a clever starting guess for a proximal algorithm solving a non-convex model and thus computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the Deep Guess effectiveness in (very) sparse tomographic protocols, where it overcomes its mere variational counterpart and many data-driven approaches at the state of the art. 
We also consider a ground truth-free implementation and test the robustness of the proposed framework to noise.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102530"},"PeriodicalIF":5.4,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
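The acceleration idea, warm-starting a proximal algorithm with a learned guess, can be illustrated with ISTA on a toy LASSO problem. This NumPy sketch stands in for the paper's scheme (their model is non-convex and the guess comes from a trained network, not from perturbing the truth):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, x0, n_iter):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x0 is where a 'Deep Guess' network output would be plugged in: a good
    starting point needs far fewer iterations to reach a given accuracy."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = np.zeros(20); x_true[:3] = 1.0    # sparse ground truth
b = A @ x_true
cold = ista(A, b, lam=0.1, x0=np.zeros(20), n_iter=5)
warm = ista(A, b, lam=0.1, x0=x_true + 0.01 * rng.standard_normal(20), n_iter=5)
print(np.linalg.norm(cold - x_true), np.linalg.norm(warm - x_true))
```

With the same five iterations, the warm-started run lands much closer to the true signal, which is the whole point of spending a network evaluation on the initial guess.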
Automatic colon segmentation on T1-FS MR images
IF 5.4 · Zone 2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-03-17 · DOI: 10.1016/j.compmedimag.2025.102528
Bernat Orellana, Isabel Navazo, Pere Brunet, Eva Monclús, Álvaro Bendezú, Fernando Azpiroz
{"title":"Automatic colon segmentation on T1-FS MR images","authors":"Bernat Orellana ,&nbsp;Isabel Navazo ,&nbsp;Pere Brunet ,&nbsp;Eva Monclús ,&nbsp;Álvaro Bendezú ,&nbsp;Fernando Azpiroz","doi":"10.1016/j.compmedimag.2025.102528","DOIUrl":"10.1016/j.compmedimag.2025.102528","url":null,"abstract":"<div><div>The volume and distribution of the colonic contents provides valuable insights into the effects of diet on gut microbiotica involving both clinical diagnosis and research. In terms of Magnetic Resonance Imaging modalities, T2-weighted images allow the segmentation of the colon lumen, while fecal and gas contents can be only distinguished on the T1-weighted Fat-Sat modality. However, the manual segmentation of T1-weighted Fat-Sat is challenging, and no automatic segmentation methods are known.</div><div>This paper proposed a non-supervised algorithm providing an accurate T1-weighted Fat-Sat colon segmentation via the registration of an existing colon segmentation in T2-weighted modality.</div><div>The algorithm consists of two phases. It starts with a registration process based on a classical deformable registration method, followed by a novel Iterative Colon Registration process that utilizes a mesh deformation approach. This approach is guided by a probabilistic model that provides the likelihood of the colon boundary, followed by a shape preservation process of the colon segmentation on T2-weighted images. The iterative process converges to achieve an optimal fit for colon segmentation in T1-weighted Fat-Sat images.</div><div>The segmentation algorithm has been tested on multiple datasets (154 scans) and acquisition machines (3) as part of the proof of concept for the proposed methodology. 
The quantitative evaluation was based on two metrics: the percentage of ground truth labeled feces correctly identified by our proposal (<span><math><mrow><mn>93</mn><mo>±</mo><mn>5</mn><mtext>%</mtext></mrow></math></span>), and the volume variation between the existing colon segmentation in the T2-weighted modality and the colon segmentation computed in T1-weighted Fat-Sat images.</div><div>Quantitative and medical evaluations demonstrated a degree of accuracy, usability, and stability concerning the acquisition hardware, making the algorithm suitable for clinical application and research.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102528"},"PeriodicalIF":5.4,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
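The two reported metrics can be read as a recall over ground-truth feces voxels and a relative volume difference. A small NumPy sketch under that interpretation (hypothetical; the paper's exact formulas may differ):

```python
import numpy as np

def feces_recall(pred_mask, gt_feces):
    """Fraction of ground-truth feces voxels covered by the predicted
    colon segmentation (the paper reports 93±5% for this style of metric)."""
    gt = gt_feces.astype(bool)
    return np.logical_and(pred_mask.astype(bool), gt).sum() / gt.sum()

def volume_variation(vol_t2, vol_t1):
    """Relative volume difference between the T2-weighted reference
    segmentation and the T1-weighted Fat-Sat result."""
    return abs(vol_t1 - vol_t2) / vol_t2

pred = np.ones((4, 4, 4), dtype=bool)                # predicted colon mask
gt = np.zeros((4, 4, 4), dtype=bool); gt[0] = True   # labeled feces voxels
print(feces_recall(pred, gt))         # 1.0: all feces voxels covered
print(volume_variation(100.0, 95.0))  # 0.05: 5% volume difference
```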