{"title":"IgCONDA-PET: Weakly-supervised PET anomaly detection using implicitly-guided attention-conditional counterfactual diffusion modeling — a multi-center, multi-cancer, and multi-tracer study","authors":"Shadab Ahamed , Arman Rahmim","doi":"10.1016/j.compmedimag.2025.102615","DOIUrl":"10.1016/j.compmedimag.2025.102615","url":null,"abstract":"<div><div>Minimizing the need for pixel-level annotated data to train PET lesion detection and segmentation networks is highly desired and can be transformative, given time and cost constraints associated with expert annotations. Current unsupervised or weakly-supervised anomaly detection methods rely on autoencoder or generative adversarial networks (GANs) trained only on healthy data. While these approaches reduce annotation dependency, GAN-based methods are notably more challenging to train than non-GAN alternatives (such as autoencoders) due to issues such as the simultaneous optimization of two competing networks, mode collapse, and training instability. In this paper, we present the weakly-supervised <strong>I</strong>mplicitly <strong>g</strong>uided <strong>CO</strong>u<strong>N</strong>terfactual diffusion model for <strong>D</strong>etecting <strong>A</strong>nomalies in <strong>PET</strong> images (IgCONDA-PET). The solution is developed and validated using PET scans from six retrospective cohorts consisting of a total of 2652 cases (multi-cancer, multi-tracer) containing both local and public datasets (spanning multiple centers). The training is conditioned on image class labels (healthy vs. unhealthy) via attention modules, and we employ implicit diffusion guidance. We perform counterfactual generation which facilitates “unhealthy-to-healthy” domain translation by generating a synthetic, healthy version of an unhealthy input image, enabling the detection of anomalies through the calculated differences. The performance of our method was compared against several other deep learning based weakly-supervised or unsupervised methods as well as traditional methods like 41% SUV<span><math><msub><mrow></mrow><mrow><mtext>max</mtext></mrow></msub></math></span> thresholding. We also highlight the importance of incorporating attention modules in our network for the detection of small anomalies. The code is publicly available at: <span><span>https://github.com/ahxmeds/IgCONDA-PET.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102615"},"PeriodicalIF":4.9,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144766922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fluid-SegNet: Multi-dimensional loss-driven Y-Net with dilated convolutions for OCT B-scan fluid segmentation","authors":"Xiaozhong Xue , Weiwei Du , Qinghua Hu , Masahiro Miyake , Keina Sado","doi":"10.1016/j.compmedimag.2025.102613","DOIUrl":"10.1016/j.compmedimag.2025.102613","url":null,"abstract":"<div><div>Optical Coherence Tomography (OCT) is a widely utilized imaging modality in clinical ophthalmology, particularly for retinal imaging. B-scan is a two-dimensional slice of the OCT volume. It enables high-resolution cross-sectional visualization of retinal layers, facilitating the analysis of retinal structure and the detection of pathological features such as fluid regions. Accurate segmentation of these fluid regions is crucial not only for determining appropriate treatment dosages but also serves as a foundation for the development of automated retinal disease diagnosis systems and visual acuity prediction models. However, the segmentation of fluid regions from OCT B-scans poses two major challenges: (1) the difficulty in delineating fine details and small fluid regions, and (2) the heterogeneity of fluid regions, which often leads to under-segmentation. This study introduces Fluid-SegNet, a novel deep learning-based segmentation framework designed to enhance the accuracy of fluid region segmentation in OCT B-scans. The proposed method is evaluated on three public datasets, UMN, AROI, and OIMHS. achieving mean Dice of 0.8725, 0.6967, and 0.8020, respectively. These results highlight the effectiveness and robustness of Fluid-SegNet in segmenting fluid regions across varied datasets and imaging conditions. Compared to existing methods, Fluid-SegNet effectively addresses the two aforementioned challenges. The source code for Fluid-SegNet is publicly available at: <span><span>https://github.com/xuexiaozhong/Fluid-SegNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102613"},"PeriodicalIF":4.9,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144766920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SNA-SKAN: Unpaired learning for SDOCT speckle noise removal based on self noise assist and kolmogorov-arnold network","authors":"Zhencun Jiang , Kangrui Ren , Zixiong Hao , Zhongjie Wang","doi":"10.1016/j.compmedimag.2025.102596","DOIUrl":"10.1016/j.compmedimag.2025.102596","url":null,"abstract":"<div><div>Optical Coherence Tomography (OCT) will inevitably be contaminated by speckle noise when imaging, resulting in a decrease in the visual quality of images and affecting clinical diagnosis. Existing unsupervised denoising methods often rely on complex model architectures or extensive data preprocessing. This paper proposes an unpaired Spectral-Domain OCT (SDOCT) denoising framework named SNA-SKAN. The Self Noise Assist (SNA) module leverages wavelet transform and singular value decomposition to extract noise components directly from noisy OCT images. These components are then fused into a new noise representation, which guides the neural network in effectively learning speckle noise patterns. Furthermore, to more effectively model speckle noise in OCT images, this paper exploits the Kolmogorov-Arnold Network (KAN) for its superior capacity to represent complex distributions, and proposes a KAN-based speckle noise generation network (SKAN). The SNA-SKAN framework is built upon the Generative Adversarial Network (GAN) architecture, employing a single generator and a single discriminator. Extensive experiments conducted on an unpaired public dataset for training and two public datasets for evaluation demonstrate that the proposed method outperforms existing unsupervised methods and state-of-the-art unpaired methods, in terms of denoising capability and detail preservation. SNA-SKAN can achieve efficient OCT denoising while preserving edges and details, demonstrating strong potential to meet clinical needs. The code is publicly available at: <span><span>https://github.com/zhencunjiang/SNA-SKAN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102596"},"PeriodicalIF":5.4,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144703086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-scale interaction and locally enhanced bridging network for medical image segmentation","authors":"Zhiyong Huang , Shiyao Zhou , Zhi Yu , Mingyang Hou , Zhiyu Zhao , Xiaoyu Li , Jiahong Wang , Yan Yan , Yushi Liu , Hans Gregersen","doi":"10.1016/j.compmedimag.2025.102610","DOIUrl":"10.1016/j.compmedimag.2025.102610","url":null,"abstract":"<div><div>Accurate organ segmentation is crucial for precise medical diagnosis. Recent methods in CNNs and Transformers have significantly enhanced automatic medical image segmentation. Their encoders and decoders often rely on simple skip connections, which fail to effectively integrate multi-scale features. This causes a misalignment between low-resolution global features and high-resolution spatial information. As a result, segmentation accuracy suffers, particularly in global contours and local details. To address this limitation, MILENet, a multi-scale interaction and locally enhanced bridging network, is proposed. The proposed context bridge incorporates a multi-scale interaction module to reorganize multi-scale features and ensure global correlation. Additionally, a local enhancement module is introduced. It includes a dilated coordinate attention mechanism and a locally enhanced FFN built with a cascaded convolutional structure. This module enhances local context modeling and improves feature discrimination. Furthermore, a source-driven connection mechanism is introduced to preserve detailed information across layers, providing richer features for decoder reconstruction. By leveraging these innovations, MILENet effectively aligns multi-scale features and enhances local details, thereby improving segmentation accuracy. MILENet has been evaluated on publicly available datasets spanning abdominal CT (Synapse), cardiac MRI (ACDC), and colonoscopy RGB images (Kvasir, CVC-ClinicDB, CVC-ColonDB, CVC-300, and ETIS-LaribDB). The results show that MILENet achieves state-of-the-art performance across different modalities. It effectively handles both large-organ segmentation in CT/MRI and fine-grained polyp delineation in endoscopic images, demonstrating strong generalizability to diverse anatomical structures and imaging conditions. The code has been released on GitHub: <span><span>https://github.com/syzhou1226/MILENET</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102610"},"PeriodicalIF":5.4,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taming large vision model for medical image segmentation via Dual Visual Prompt Tuning","authors":"Ruize Cui , Lanqing Liu , Jing Zou , Xiaowei Hu , Jialun Pei , Jing Qin","doi":"10.1016/j.compmedimag.2025.102608","DOIUrl":"10.1016/j.compmedimag.2025.102608","url":null,"abstract":"<div><div>This paper presents Dual Visual Prompt Tuning (DVPT), an innovative strategy to enhance the performance of the Segment Anything Model (SAM) for medical image segmentation. While SAM demonstrates robust generalization in natural image segmentation, its effectiveness in medical tasks is hindered by the distinct characteristics of medical targets, the presence of noise and artifacts, and insufficient task-specific data for fine-tuning. Moreover, the manual-prompting paradigm applied in SAM make it laborious when adapted to medical domain. To address these challenges, DVPT employs an fully automatic prompting paradigm and assembles both image-specific local and global guidance into SAM through two components: the <em>Local Feature Prompt Tuning (LFPT)</em> module, which enhances local information capture of detailed anatomical structures, and the <em>Global Guiding Prompt (GGP)</em> encoder, which mitigates noise interference and strengthens the identification of ambiguous boundaries within medical images. By integrating both local and global prompts within the mask decoder, the proposed DVPT yields superior segmentation accuracy. Experimental results across three medical image segmentation tasks consistently demonstrate that our method outperforms current state-of-the-art approaches. Our method significantly contributes to accurate and impactful computer-assisted diagnostics, promoting advancements in healthcare solutions. Our code can be available at <span><span>https://github.com/cuiruize/DVPT</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102608"},"PeriodicalIF":5.4,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144679329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diffusion model for medical image denoising, reconstruction and translation","authors":"Wei Wang , Jiayu Xia , Gongning Luo , Suyu Dong , Xiangyu Li , Jie Wen , Shuo Li","doi":"10.1016/j.compmedimag.2025.102593","DOIUrl":"10.1016/j.compmedimag.2025.102593","url":null,"abstract":"<div><div>Diffusion models, as a class of generative models, have demonstrated significant performance in image generation since their inception. The fundamental principle behind diffusion models is the definition of a forward process and a reverse process. The input data is progressively perturbed by adding random noise during the forward process, and the expected noise distribution is learned. In the reverse process, noise is gradually reduced from a Gaussian distribution to generate the image. Recently, diffusion models have been widely adopted in various image processing tasks, including text-to-image synthesis, denoising, segmentation, and object detection. In medical image analysis, diffusion models have shown considerable potential for improving diagnostic accuracy and image quality. This article provides a comprehensive overview of diffusion models, particularly their applications in medical image denoising, reconstruction, and translation. Specifically, we categorize diffusion models into two types: denoising diffusion probabilistic models and score-based models and introduce the solid theoretical foundations and fundamental concepts underlying these models. Additionally, we introduce publicly available datasets and evaluation metrics relevant to these methods. Most importantly, we provide detailed introductions to several representative articles, summarize current applications of diffusion models in these domains, and discuss potential future directions for development and challenges.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102593"},"PeriodicalIF":4.9,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144749627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SML-Net: Semi-supervised multi-task learning network for carotid plaque segmentation and classification","authors":"Haitao Gan , Liang Liu , Furong Wang , Zhi Yang , Zhongwei Huang , Ran Zhou","doi":"10.1016/j.compmedimag.2025.102607","DOIUrl":"10.1016/j.compmedimag.2025.102607","url":null,"abstract":"<div><div>Carotid ultrasound image segmentation and classification are crucial in assessing the severity of carotid plaques which serve as a major cause of ischemic stroke. Although many methods are employed for carotid plaque segmentation and classification, treating these tasks separately neglects their interrelatedness. Currently, there is limited research exploring the key information of both plaque and background regions, and collecting and annotating extensive segmentation data is a costly and time-intensive task. To address these two issues, we propose an end-to-end semi-supervised multi-task learning network(SML-Net), which can classify plaques while performing segmentation. SML-Net identifies regions by extracting image features and fuses multi-scale features to improve semi-supervised segmentation. SML-Net effectively utilizes plaque and background regions from the segmentation results and extracts features from various dimensions, thereby facilitating the classification task. Our experimental results indicate that SML-Net achieves a plaque classification accuracy of 86.59% and a Dice Similarity Coefficient (DSC) of 82.36%. Compared to the leading single-task network, SML-Net improves DSC by 1.2% and accuracy by 1.84%. Similarly, when compared to the best-performing multi-task network, our method achieves a 1.05% increase in DSC and a 2.15% improvement in classification accuracy.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102607"},"PeriodicalIF":5.4,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144653625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated microvascular invasion prediction of hepatocellular carcinoma via deep relation reasoning from dynamic contrast-enhanced ultrasound","authors":"Yaoqin Wang , Wenting Xie , Chenxin Li , Qing Xu , Zhongshi Du , Zhaoming Zhong , Lina Tang","doi":"10.1016/j.compmedimag.2025.102606","DOIUrl":"10.1016/j.compmedimag.2025.102606","url":null,"abstract":"<div><div>Hepatocellular carcinoma (HCC) is a major global health concern, with microvascular invasion (MVI) being a critical prognostic factor linked to early recurrence and poor survival. Preoperative MVI prediction remains challenging, but recent advancements in dynamic contrast-enhanced ultrasound (CEUS) imaging combined with artificial intelligence show promise in improving prediction accuracy. CEUS offers real-time visualization of tumor vascularity, providing unique insights into MVI characteristics. This study proposes a novel deep relation reasoning approach to address the challenges of modeling intricate temporal relationships and extracting complex spatial features from CEUS video frames. Our method integrates CEUS video sequences and introduces a visual graph reasoning framework that correlates intratumoral and peritumoral features across various imaging phases. The system employs dual-path feature extraction, MVI pattern topology construction, Graph Convolutional Network learning, and an MVI pattern discovery module to capture complex features while providing interpretable results. Experimental findings demonstrate that our approach surpasses existing state-of-the-art models in accuracy, sensitivity, and specificity for MVI prediction. The system achieved superiors accuracy, sensitivity, specificity and AUC. These advancements promise to enhance HCC diagnosis and management, potentially revolutionizing patient care. The method’s robust performance, even with limited data, underscores its potential for practical clinical application in improving the efficacy and efficiency of HCC patient diagnosis and treatment planning.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102606"},"PeriodicalIF":5.4,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144653624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DFMF: Harnessing spectral-spatial synergy for MR image segmentation through Dual-Task Feature Mining Framework","authors":"Wenyan Zhong , Zailiang Chen , Hailan Shen , Xinyi Liu , Wanqing Xiong , Hui Lui","doi":"10.1016/j.compmedimag.2025.102603","DOIUrl":"10.1016/j.compmedimag.2025.102603","url":null,"abstract":"<div><div>Automated segmentation of Magnetic Resonance (MR) images plays a critical role in medical applications, including tumor delineation, organ volume measurement, and lesion tracking. While traditional supervised learning methods depend heavily on costly annotated data, MR images inherently contain rich anatomical information, such as the shape, size, and spatial relationships of organs and tissues. Effectively leveraging this information to enhance segmentation performance remains a significant challenge in current research. To address this, we propose a novel Dual-task Feature Mining Framework (DFMF), which integrates self-supervised and semi-supervised learning paradigms. DFMF simultaneously optimizes two complementary tasks: image inpainting and segmentation, enabling the extraction of richer and more discriminative feature representations. This dual-task mechanism enhances the model’s ability to capture complex anatomical structures, leading to superior segmentation performance. To maximize the utility of unannotated data, we introduce a Self-consistency Loss, which enforces consistency between inpainted and original images without requiring explicit data augmentation. Additionally, we design a Hybrid Receptive Field Network (HRFNet) as the backbone of DFMF, which effectively captures global frequency-domain information while preserving fine spatial details. Extensive experiments on four MR image datasets demonstrate that DFMF outperforms state-of-the-art segmentation methods, and ablation studies validate the contribution of each component from multiple perspectives.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102603"},"PeriodicalIF":5.4,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144662079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diffusion-based image translation from white light to narrow-band imaging in gastrointestinal endoscopy","authors":"Bilin Wang , Changda Lei , Kaicheng Hong , Xiuji Kan , Yifan Ouyang , Junbo Li , Yunbo Guo , Rui Li","doi":"10.1016/j.compmedimag.2025.102605","DOIUrl":"10.1016/j.compmedimag.2025.102605","url":null,"abstract":"<div><div>Narrow-band imaging (NBI) enhances vascular and mucosal visualization, enabling early detection of gastrointestinal lesions. However, its adoption is limited by hardware constraints and costs, leaving white light endoscopy (WLE) as the widely used but diagnostically inferior modality. Translating WLE into realistic NBI-like images provides a scalable solution to improve diagnostic workflows, generate synthetic datasets, and facilitate multi-modality analysis. Translating WLE images into realistic NBI-like images is challenging due to the lack of paired WLE-NBI image datasets for training and the complex, varied nature of lesions in gastrointestinal endoscopy, which often involve rich details and subtle textures. In this study, we propose a novel diffusion-based framework tailored for WLE-to-NBI image translation. Leveraging stable diffusion with domain-specific enhancements, our method integrates LoRA fine-tuning to embed NBI-specific features and employs a self-attention injection mechanism to dynamically incorporate vascular and mucosal patterns while preserving the spatial structure and semantic integrity of the input WLE images. This approach ensures fine-grained feature translation and structural fidelity crucial for medical applications. Quantitative and qualitative experiments highlight the superiority of the proposed approach in generating high-fidelity NBI-like images. Furthermore, it demonstrates potential for data augmentation and robustness in long-range video frame registration, offering a reliable solution for enhancing clinical decision-making.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102605"},"PeriodicalIF":5.4,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144633616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}