{"title":"FreqYOLO: A Uterine Disease Detection Network Based on Local and Global Frequency Feature Learning","authors":"Ziying Huang , Shuangshuang Lin , Kedan Liao , Yuezhi Wang , Mei Zhang , Lixin Li , Musheng Wu , Kaixian Deng , Qing Wang","doi":"10.1016/j.compmedimag.2025.102545","DOIUrl":"10.1016/j.compmedimag.2025.102545","url":null,"abstract":"<div><div>Leiomyomas (LM) and adenomyosis (AM) are common gynecological diseases with high incidence rates and an increasing trend of affecting younger women. Accurate detection and differentiation of LM and AM in ultrasound images are crucial for selecting appropriate treatment options. Due to the heterogeneity of these two diseases, the location, size, and number of lesions often vary significantly, posing substantial challenges for sonographers to conduct manual examinations. In this study, we propose a frequency feature learning-based detection method, FreqYOLO, for detecting LM and AM in ultrasound images. Specifically, in the dual-branch feature encoder, we introduce global and local frequency features. Subsequently, we apply a Fusion Neck to perform multi-scale fusion of the global and local features, enriching the frequency information. Finally, an improved anchor suppression method is employed to output the optimal detection anchors. The proposed FreqYOLO is compared with several state-of-the-art techniques, achieving a Recall of 0.734, Precision of 0.795, F1 score of 0.763, AP50 of 0.788, and mAP of 0.487. 
The results demonstrate that FreqYOLO achieves better performance in detecting and differentiating LM and AM.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102545"},"PeriodicalIF":5.4,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143807804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
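The FreqYOLO record above does not spell out how its global frequency features are computed; one common way to obtain a global frequency representation of a feature map is the centred 2-D FFT log-magnitude. The sketch below is a generic illustration under that assumption (the function name and the log-magnitude choice are ours, not the paper's method):

```python
import numpy as np

def global_frequency_features(feature_map: np.ndarray) -> np.ndarray:
    """Represent a (H, W) feature map by its centred 2-D FFT
    log-magnitude, one simple notion of a "global frequency" feature."""
    spectrum = np.fft.fft2(feature_map)
    centred = np.fft.fftshift(spectrum)   # move low frequencies to the centre
    return np.log1p(np.abs(centred))      # log-magnitude for numerical stability

fmap = np.random.default_rng(0).standard_normal((8, 8))
feats = global_frequency_features(fmap)
print(feats.shape)  # (8, 8)
```

In a detector such features would typically be computed per encoder stage and fused with spatial features, as the abstract's Fusion Neck suggests.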
{"title":"Coronary p-Graph: Automatic classification and localization of coronary artery stenosis from Cardiac CTA using DSA-based annotations","authors":"Yuanxiu Zhang , XiaoLei Zhang , Yanglong He , Shizhe Zang , Hongzhi Liu , Tianyi Liu , Yudong Zhang , Yang Chen , Huazhong Shu , Jean-Louis Coatrieux , Hui Tang , Longjiang Zhang","doi":"10.1016/j.compmedimag.2025.102537","DOIUrl":"10.1016/j.compmedimag.2025.102537","url":null,"abstract":"<div><div>Coronary artery disease (CAD) is a prevalent cardiovascular condition with profound health implications. Digital subtraction angiography (DSA) remains the gold standard for diagnosing vascular disease, but its invasiveness and procedural demands underscore the need for alternative diagnostic approaches. Coronary computed tomography angiography (CCTA) has emerged as a promising non-invasive method for accurately classifying and localizing coronary artery stenosis. However, the complexity of CCTA images and their dependence on manual interpretation highlight the essential role of artificial intelligence in supporting clinicians in stenosis detection.</div><div>This paper introduces a novel framework, <strong><u>Coronary</u> <u>p</u>roposal-based <u>Graph</u> Convolutional Networks (Coronary p-Graph)</strong>, designed for the automated detection of coronary stenosis from CCTA scans. The framework transforms CCTA data into curved multi-planar reformation (CMPR) images that delineate the coronary artery centerline. After aligning the CMPR volume along this centerline, the entire vasculature is analyzed using a convolutional neural network (CNN) for initial feature extraction. Based on predefined criteria informed by prior knowledge, the model generates candidate stenotic segments, termed “proposals,” which serve as graph nodes. 
The spatial relationships between nodes are then modeled as edges, constructing a graph representation that is processed using a graph convolutional network (GCN) for precise classification and localization of stenotic segments. <strong>All CCTA images were rigorously annotated by three expert radiologists, using DSA reports as the reference standard.</strong> This novel methodology offers diagnostic performance equivalent to invasive DSA based solely on non-invasive CCTA, potentially reducing the need for invasive procedures.</div><div>The proposed method was evaluated on a retrospective dataset comprising 259 cases, each with paired CCTA and corresponding DSA reports. Quantitative analyses demonstrated the superior performance of our approach compared to existing methods, with the following metrics: accuracy of 0.844, specificity of 0.910, area under the receiver operating characteristic curve (AUC) of 0.74, and mean absolute error (MAE) of 0.157.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102537"},"PeriodicalIF":5.4,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143785924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
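The Coronary p-Graph record above treats candidate stenotic segments as graph nodes processed by a GCN. As a generic, library-free sketch of the graph-convolution step itself (symmetric normalisation with self-loops; the toy chain graph and shapes are illustrative, not the paper's construction):

```python
import numpy as np

def gcn_layer(X: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution step: add self-loops, symmetrically
    normalise the adjacency, aggregate neighbour features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

# Four "proposal" nodes in a chain, 3 input features each, projected to 2.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
H = gcn_layer(np.ones((4, 3)), A, np.ones((3, 2)))
```

Stacking such layers lets each proposal's representation absorb context from spatially adjacent proposals before classification.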
{"title":"Retinal OCT image segmentation with deep learning: A review of advances, datasets, and evaluation metrics","authors":"Huihong Zhang , Bing Yang , Sanqian Li , Xiaoqing Zhang , Xiaoling Li , Tianhang Liu , Risa Higashita , Jiang Liu","doi":"10.1016/j.compmedimag.2025.102539","DOIUrl":"10.1016/j.compmedimag.2025.102539","url":null,"abstract":"<div><div>Optical coherence tomography (OCT) is a widely used imaging technology in ophthalmic clinical practice, providing non-invasive access to high-resolution retinal images. Segmentation of anatomical structures and pathological lesions in retinal OCT images, directly impacts clinical decisions. While commercial OCT devices segment multiple retinal layers in healthy eyes, their performance degrades severely under pathological conditions. In recent years, the rapid advancements in deep learning have significantly driven research in OCT image segmentation. This review provides a comprehensive overview of the latest developments in deep learning-based segmentation methods for retinal OCT images. Additionally, it summarizes the medical significance, publicly available datasets, and commonly used evaluation metrics in this field. The review also discusses the current challenges faced by the research community and highlights potential future directions.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102539"},"PeriodicalIF":5.4,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143799056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic semantic segmentation of the osseous structures of the paranasal sinuses","authors":"Yichun Sun, Alejandro Guerrero-López, Julián D. Arias-Londoño, Juan I. Godino-Llorente","doi":"10.1016/j.compmedimag.2025.102541","DOIUrl":"10.1016/j.compmedimag.2025.102541","url":null,"abstract":"<div><div>Endoscopic sinus and skull base surgeries require the use of precise neuronavigation techniques, which may take advantage of accurate delimitation of surrounding structures. This delimitation is critical for robotic-assisted surgery procedures to limit volumes of no resection. In this respect, an accurate segmentation of the osseous structures of the paranasal sinuses is a relevant issue to protect critical anatomic structures during these surgeries. Currently, manual segmentation of these structures is a labour-intensive task and requires wide expertise, often leading to inconsistencies. This is due to the lack of publicly available automatic models specifically tailored for the automatic delineation of the complex osseous structures of the paranasal sinuses. To address this gap, we introduce an open source dataset and a UNet SwinTR model for the segmentation of these complex structures. The initial model was trained on nine complete ex vivo CT scans of the paranasal region and then improved with semi-supervised learning techniques. When tested on an external dataset recorded under different conditions, it achieved a DICE score of 98.25 ± 0.9. These results underscore the effectiveness of the model and its potential for broader research applications. 
By making both the dataset and the model publicly available, this work aims to catalyse further research that could improve the precision of clinical interventions in endoscopic sinus and skull base surgeries.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102541"},"PeriodicalIF":5.4,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143776404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
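The record above reports a DICE score on an external test set. For readers unfamiliar with the metric, here is a minimal implementation of the Dice similarity coefficient for binary masks (a generic textbook formulation, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [0, 0]], dtype=bool)
print(round(dice_score(pred, target), 3))  # 0.667
```

A score of 98.25 on this scale would correspond to near-perfect voxel overlap with the reference annotation.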
{"title":"ULST: U-shaped LeWin Spectral Transformer for virtual staining of pathological sections","authors":"Haoran Zhang , Mingzhong Pan , Chenglong Zhang , Chenyang Xu , Hongxing Qi , Dapeng Lei , Xiaopeng Ma","doi":"10.1016/j.compmedimag.2025.102534","DOIUrl":"10.1016/j.compmedimag.2025.102534","url":null,"abstract":"<div><div>At present, pathological section staining faces several challenges, including complex sample preparation and stringent infrastructure requirements. Virtual staining methods utilizing deep neural networks to automatically generate stained images are gaining recognition. However, most current virtual staining techniques rely on standard RGB microscopy, which lacks spatial spectral information. In contrast, hyperspectral imaging of pathological sections provides rich spatial spectral data while maintaining high resolution. To address this issue, the U-shaped Locally-enhanced Window (LeWin) Spectral Transformer (ULST) was developed to convert unstained hyperspectral microscopic images into RGB equivalents of hematoxylin and eosin (HE) stained samples. The LeWin Spectral Transformer (LST) block within ULST takes full advantage of the transformer’s attention extraction capabilities. It applies local self-attention in the spatial domain using non-overlapping windows to capture local context while significantly reducing computational complexity for high-resolution feature maps and preserving spatial features from hyperspectral images (HSI). Furthermore, the Spectral Transformer collects spectral features without losing spatial information. By integrating a multi-scale encoder-bottle-decoder structure in a U-shaped network configuration with sequential symmetric connections of LSTs, ULST performs virtual HE staining on microscopic images of unstained hyperspectral pathological sections. 
Qualitative and quantitative experiments show that ULST performs better than other advanced virtual staining methods in the virtual HE staining task.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102534"},"PeriodicalIF":5.4,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
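The LeWin block described above computes self-attention inside non-overlapping spatial windows. The core bookkeeping for that is the window-partition reshape, sketched here in NumPy (a Swin-style layout; the names and shapes are ours, not code from the paper):

```python
import numpy as np

def window_partition(x: np.ndarray, win: int) -> np.ndarray:
    """Split a (H, W, C) feature map into non-overlapping (win, win)
    windows, returning (num_windows, win * win, C): the token layout on
    which local self-attention is computed independently per window."""
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    x = x.transpose(0, 2, 1, 3, 4)          # group window rows and columns
    return x.reshape(-1, win * win, C)

x = np.arange(16, dtype=float).reshape(4, 4, 1)
windows = window_partition(x, 2)
print(windows.shape)  # (4, 4, 1)
```

Because attention cost is quadratic in token count, restricting it to win*win tokens per window is what keeps high-resolution feature maps tractable, as the abstract notes.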
{"title":"TSNet: Vessel segmentation with sequential frame temporal information in coronary angiography","authors":"Hui Yu , Hui Gao , Guang Li , Zewei Qin , Dagong Jia , Guangpu Wang , Shuo Wang","doi":"10.1016/j.compmedimag.2025.102540","DOIUrl":"10.1016/j.compmedimag.2025.102540","url":null,"abstract":"<div><h3>Objective</h3><div>When using single-frame images for coronary vessel segmentation, the small size and complex structure of the vessels often lead to over-segmentation and mis-segmentation. Additionally, limited information from low-quality images result in disrupting the vascular topology. To address this, we introduce temporal information from coronary angiography sequences to aid in segmentation and improve accuracy.</div></div><div><h3>Methods</h3><div>We establish a dataset SqCS specialized for coronary angiography sequence segmentation and propose a time series-based coronary angiography segmentation network TSNet. Specifically, our proposed TSNet is a multi-input single-output end-to-end U-shaped network that utilizes multiple encoders for simultaneous extraction of spatial features from input sequence frames. It incorporates an edge enhancement method for segmented frames and employs the Temporal and Spatial Attention Unit (TSAU) for refined extraction of temporal and spatial information and fusion of multi-frame features. Our code is publicly available at <span><span>https://github.com/huigao-II/TSNet</span><svg><path></path></svg></span>.</div></div><div><h3>Results</h3><div>We validated TSNet on our SqCS dataset, achieving a Dice score of 0.8966, Acc of 0.9906, IoU of 0.8127, clDice of 0.9354, VCA of 1.9027, BIOU of 0.3565 and VCA of 1.9072. Conclusion: Our method enhances pixel-wise accuracy while addressing vessel discontinuities in low-contrast regions common in single-frame segmentation. 
It preserves vascular topology and significantly improves edge accuracy.</div></div><div><h3>Significance</h3><div>Our SqCS dataset provides a foundation for sequence-based coronary angiography vessel segmentation research. The segmentation model trained using our method lays the groundwork for accurate clinical diagnosis and treatment decisions in coronary artery disease.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102540"},"PeriodicalIF":5.4,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143776523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
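The TSAU described in the TSNet record fuses temporal and spatial information across sequence frames, but the abstract gives no equations. The following is only a minimal stand-in for the general pattern of per-pixel softmax weighting over time; the real module is learned, and this parameter-free version is our assumption for illustration:

```python
import numpy as np

def temporal_fusion(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) feature maps from T sequence frames.
    Weight each frame per-pixel by a softmax over time of its own
    activation, then sum, so strongly responding frames dominate."""
    scores = frames - frames.max(axis=0, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)        # softmax over T
    return (weights * frames).sum(axis=0)

frames = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
fused = temporal_fusion(frames)
print(fused.shape)  # (2, 2)
```

In a trained network the softmax scores would come from learned projections of the frame features rather than the raw activations used here.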
{"title":"LA-ResUNet: Attention-based network for longitudinal liver tumor segmentation from CT images","authors":"Ri Jin , Hu-Ying Tang , Qian Yang , Wei Chen","doi":"10.1016/j.compmedimag.2025.102536","DOIUrl":"10.1016/j.compmedimag.2025.102536","url":null,"abstract":"<div><div>Longitudinal liver tumor segmentation plays a fundamental role in studying and monitoring the progression of associated diseases. The correlation and differences between longitudinal data can further improve segmentation performance, which are inevitably omitted in single-time-point segmentation. However, there is no research in this field due to the lack of relevant data. To this issue, we collect and annotate the first longitudinal liver tumor segmentation benchmark dataset. A novel strategy that utilizes images from one time point to facilitate the image segmentation from another time point of the same patient is presented. On this basis, we propose a longitudinal attention based residual U-shaped network. Within it, a channel & spatial attention module quantifies both channel-wise and spatial-wise dependencies of each feature to refine feature representations. And a longitudinal co-segmentation module captures cross-temporal correlation to recalibrate the feature at one time point according to another one for enhanced segmentation. Longitudinal segmentation is achieved by plugging these two multi-scale modules into each layer of the backbone network. Extensive experiments on our CT liver tumor dataset and an MRI brain tumor dataset have validated the effectiveness of the established strategy and the longitudinal segmentation ability of our network. 
Ablation studies have verified the functions of the proposed modules and their respective components.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102536"},"PeriodicalIF":5.4,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
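The channel & spatial attention module above quantifies channel-wise and spatial-wise dependencies. A library-free sketch of that generic pattern (pooling followed by sigmoid gates, in the spirit of CBAM-style modules; the paper's actual module may differ in detail):

```python
import numpy as np

def channel_spatial_attention(x: np.ndarray) -> np.ndarray:
    """x: (C, H, W) feature map. Gate channels by a sigmoid of their
    global average, and spatial positions by a sigmoid of the
    channel-mean map; apply both gates multiplicatively."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    channel_gate = sigmoid(x.mean(axis=(1, 2)))[:, None, None]  # (C, 1, 1)
    spatial_gate = sigmoid(x.mean(axis=0))[None, :, :]          # (1, H, W)
    return x * channel_gate * spatial_gate

y = channel_spatial_attention(np.ones((2, 3, 3)))
print(y.shape)  # (2, 3, 3)
```

A learned version would replace the raw means with small convolutional or fully connected projections before the sigmoid.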
{"title":"Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort","authors":"Alessia De Biase , Nanna Maria Sijtsema , Lisanne V. van Dijk , Roel Steenbakkers , Johannes A. Langendijk , Peter van Ooijen","doi":"10.1016/j.compmedimag.2025.102535","DOIUrl":"10.1016/j.compmedimag.2025.102535","url":null,"abstract":"<div><h3>Purpose</h3><div>Information on deep learning (DL) tumor segmentation accuracy on a voxel and a structure level is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images and voxel-level predicted probabilities (TPM) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in TPMs predicts segmentation model accuracy in an independent external cohort.</div></div><div><h3>Methods</h3><div>We retrospectively gathered PET/CT images and manual delineations of gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, model uncertainty by coefficient of variation (CV) of multiple predictions.</div></div><div><h3>Results</h3><div>Predicted GTVp and GTVln segmentations in the external test achieved 0.75 and 0.70 aggregate DSC. 
Patient-specific CV and surface DSC showed a significant correlation for both structures (−0.54 for GTVp and −0.66 for GTVln) in the external set, indicating significant calibration between accuracy and uncertainty.</div></div><div><h3>Conclusion</h3><div>Significant accuracy versus uncertainty calibration was achieved for TPMs in both internal and external test sets, indicating the potential use of quantified uncertainty from TPMs to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102535"},"PeriodicalIF":5.4,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
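The uncertainty measure in the record above is the coefficient of variation (CV) across multiple model predictions. A minimal sketch of reducing an ensemble of voxel-wise probability maps to one structure-level CV value (the foreground threshold and masking are our illustrative assumptions, not the study's definition):

```python
import numpy as np

def ensemble_cv(prob_maps: np.ndarray, eps: float = 1e-7) -> float:
    """prob_maps: (M, H, W) voxel-wise probabilities from M models.
    Structure-level uncertainty: mean coefficient of variation
    (std / mean) over voxels where the ensemble mean is non-negligible."""
    mean = prob_maps.mean(axis=0)
    std = prob_maps.std(axis=0)
    mask = mean > 0.05            # illustrative foreground threshold
    if not mask.any():
        return 0.0
    return float((std[mask] / (mean[mask] + eps)).mean())

agreeing = np.full((3, 4, 4), 0.8)   # three models in perfect agreement
print(ensemble_cv(agreeing))  # 0.0
```

A high CV flags cases where ensemble members disagree, which is exactly the signal the study correlates with lower surface DSC.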
{"title":"A technology framework for distributed preoperative planning and medical training in deep brain stimulation","authors":"Qi Zhang , Roy Eagleson , Sandrine de Ribaupierre","doi":"10.1016/j.compmedimag.2025.102533","DOIUrl":"10.1016/j.compmedimag.2025.102533","url":null,"abstract":"<div><div>Deep brain stimulation (DBS) is a groundbreaking therapy for movement disorders, necessitating precise planning and extensive training to ensure accurate electrode placement in critical brain regions, such as the thalamic nuclei. This paper introduces an innovative technology framework for DBS to support distributed, real-time preoperative planning and medical training. The system integrates advanced imaging techniques, interactive graphical representation, and real-time data synchronization to assist clinicians in accurately identifying essential anatomical structures and refining pre-surgical plans. At the platform’s core are multi-volume rendering, segmentation, and virtual tool modeling algorithms that employ transparency and refinement controls to seamlessly merge and visualize different tissue types in 3D alongside their interactions with surgical tools. This method enhances visual clarity and provides a highly detailed depiction of crucial structures, ensuring the precision required for effective DBS planning. By delivering dynamic, real-time feedback, the framework supports improved decision-making and sets a new standard for collaborative DBS training and procedural preparation. The platform’s web-based synchronization architecture enhances collaboration by allowing neurologists and surgeons to simultaneously interact with visualized data from any location. This functionality supports live feedback, promotes collaborative decision-making, and streamlines procedural planning, leading to improved surgical outcomes. 
Performance evaluations across various hardware configurations and web browsers demonstrate the platform’s high rendering speed and low-latency data synchronization, ensuring responsive and reliable interactions essential for clinical use. Its adaptability makes it suitable for medical training, preoperative planning, and intraoperative support, accommodating a wide range of hardware setups and web environments to address the specific demands of DBS-related procedures. This research lays a robust foundation for advancing distributed clinical planning, comprehensive medical education, and improved patient care in neurostimulation therapies.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102533"},"PeriodicalIF":5.4,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143715047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration","authors":"Xin Yang , Dongxue Li , Liwei Deng , Sijuan Huang , Jing Wang","doi":"10.1016/j.compmedimag.2025.102527","DOIUrl":"10.1016/j.compmedimag.2025.102527","url":null,"abstract":"<div><div>Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly when dealing with medical images with intricate structures. To address these issues, this paper presents a novel registration network (TCDE-Net), an unsupervised medical image registration method based on a dual-encoder architecture. The dual encoders complement each other in feature extraction, enabling the model to effectively handle large-scale nonlinear deformations and capture intricate local details, thereby enhancing registration accuracy. Additionally, the detail-enhancement attention module aids in restoring fine-grained features, improving the network's capability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR dataset demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. 
Our code is available at <span><span>https://github.com/muzidongxue/TCDE-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102527"},"PeriodicalIF":5.4,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143697647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
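An unsupervised registration network like the one above typically predicts a dense displacement field that is applied to the moving image by a spatial transformer. As a dependency-free illustration of that final warping step in 2-D (nearest-neighbour sampling for brevity, where such networks normally use bilinear or trilinear interpolation; the function is ours, not from the TCDE-Net repository):

```python
import numpy as np

def warp_nearest(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp a 2-D image by a dense displacement field (nearest-neighbour).
    flow[..., 0] / flow[..., 1] hold row / column displacements in pixels;
    out-of-bounds samples are clamped to the image border."""
    H, W = image.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_r = np.clip(np.rint(rows + flow[..., 0]).astype(int), 0, H - 1)
    src_c = np.clip(np.rint(cols + flow[..., 1]).astype(int), 0, W - 1)
    return image[src_r, src_c]

img = np.arange(9, dtype=float).reshape(3, 3)
warped = warp_nearest(img, np.zeros((3, 3, 2)))  # zero flow: identity warp
```

During training, the similarity between the warped moving image and the fixed image drives the network without any ground-truth deformations, which is what makes the method unsupervised.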