Latest Articles: Computerized Medical Imaging and Graphics

Semantic token-guided hierarchical adversarial knowledge distillation for 3D abdominal organ segmentation
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-05-04 | DOI: 10.1016/j.compmedimag.2026.102773 | Vol. 132, Article 102773
Xiangchun Yu, Guangjun Zhu, Ata Jahangir Moshayedi, Wang Xin, Jianqing Wu, Jian Zheng, Hechang Chen

Abstract: Deploying computationally intensive 3D models for accurate abdominal organ segmentation remains a significant challenge, particularly in resource-constrained clinical settings. Knowledge distillation (KD) offers a viable path by transferring knowledge from a cumbersome teacher to a compact student network. However, prevailing KD methods are hampered by two key limitations: inadequate modeling of dynamic semantic relationships across multi-scale features, leading to misalignment in low-contrast regions, and an inability to bridge the architectural heterogeneity gap (e.g., Transformer teacher to CNN student), resulting in feature distribution discrepancies. To overcome these issues, we propose STM_HAC-KD, a novel KD framework that synergistically integrates a Semantic Token-guided Multi-scale KD (STM-KD) module and a Hierarchical Multi-scale Patch-consistent Adversarial Alignment KD (HMPA²-KD) module. STM-KD employs learnable, category-aware semantic tokens to establish dynamic cross-scale interactions, effectively correlating shallow structural details with deep semantic context. Complementarily, HMPA²-KD leverages our proposed lightweight 3D Gap Elimination PatchGAN discriminators to adversarially align student feature distributions with the teacher's across multiple scales, thereby eliminating segmentation errors near the boundaries. Comprehensive experiments on the WORD and BTCV datasets demonstrate that STM_HAC-KD consistently outperforms advanced comparative methods, achieving superior Dice Similarity Coefficient (DSC) and significant reductions in 95% Hausdorff Distance (HD95), particularly in boundary-ambiguous regions. This work establishes an efficient and precise paradigm for 3D abdominal organ segmentation, with direct relevance to intelligent clinical decision systems. Our code is available at: https://github.com/oneplus1x/STM_HAC-KD.

Citations: 0

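The framework above builds on teacher-student knowledge distillation. As a minimal illustrative sketch (not the STM_HAC-KD method itself), the generic temperature-scaled soft-label distillation loss that such frameworks typically start from can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student), averaged over samples.

    Both inputs have shape (n_samples, n_classes). T softens both
    distributions; the T**2 factor restores the gradient scale."""
    p = softmax(teacher_logits / T)  # soft teacher targets
    q = softmax(student_logits / T)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(T * T * kl.mean())
```

When the student matches the teacher exactly, the loss is zero; any divergence in the softened class distributions makes it positive.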
AI-driven retinal image quality monitoring in diabetic retinopathy screening: A retrospective study identifying actionable insights to improve imaging protocols
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-05-01 (Epub 2026-04-22) | DOI: 10.1016/j.compmedimag.2026.102771 | Vol. 131, Article 102771
Imanol Pinto, Álvaro Olazarán, David Jurio, Borja de la Osa, Miguel Sainz, Aritz Oscoz, Jerónimo Ballaz, Javier Gorricho, Mikel Galar, José Andonegui

Objective: To verify the applicability of retinal image quality (RIQ) models in real-world diabetic retinopathy screening for uncovering actionable insights to improve imaging protocols.

Materials and Methods: NaIA-RD, a custom AI system developed by the University Hospital of Navarre (Spain) for diabetic retinopathy screening, was employed to monitor RIQ across multiple imaging sites within the hospital. A large retrospective dataset of 55,801 routine retinal images collected over 3.6 years was compiled for this purpose. Additionally, two convolutional neural networks, trained on external public datasets (EyeQ and DeepDRiD), were used as independent comparators. The longitudinal RIQ outputs from the NaIA-RD, EyeQ, and DeepDRiD models were then analyzed to assess their alignment with clinical decisions.

Results: All three models identified similar differences in RIQ across imaging sites, camera models, and imaging technicians. Ungradable rates varied widely among sites, ranging from 2.23% to 28.23%. These differences evolved over time due to changes in the data distribution (data drift). Among the models, the one trained on DeepDRiD demonstrated the highest agreement with clinicians, achieving an Average Precision of 0.431, compared to 0.389 for NaIA-RD and 0.392 for EyeQ.

Discussion: Monitoring RIQ revealed actionable insights, such as differences related to camera models and technician experience, suggesting potential benefits from targeted training and imaging protocol standardization. Comparing outputs from multiple models strengthened the reliability of the observed trends.

Conclusion: AI tools with modular design and detailed RIQ scoring can effectively monitor clinical imaging workflows, enabling data-driven healthcare quality improvement initiatives.

Citations: 0

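Average Precision, the agreement metric reported above, summarizes the precision-recall curve by averaging precision at each correctly ranked positive. A minimal sketch of the standard computation (illustrative; the paper's exact evaluation protocol is not specified here):

```python
import numpy as np

def average_precision(labels, scores):
    """AP = mean of precision@k taken at the rank of each positive.

    labels: binary array (1 = positive); scores: higher = more positive."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # rank by score
    y = np.asarray(labels)[order]
    tp = np.cumsum(y)                       # true positives up to rank k
    k = np.arange(1, len(y) + 1)
    precision = tp / k                      # precision at every rank
    return float((precision * y).sum() / y.sum())
```

A perfect ranking (all positives above all negatives) yields AP = 1.0; interleaved rankings are penalized in proportion to how deep the positives sit.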
SLDPC: Slide-Level Dual-Prompt Collaboration for few-shot whole slide image classification
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-05-01 (Epub 2026-04-21) | DOI: 10.1016/j.compmedimag.2026.102768 | Vol. 131, Article 102768
Lulin Yuan, Yifeng Zheng, Weiqiang Liu, Hong Zhao, Wenjie Zhang, Baoya Wei, Liming Chen

Abstract: Digital pathology standardizes diagnostic workflows through the digitization of conventional slides and the integration of algorithmic analysis. Few-shot Weakly Supervised Whole Slide Image (WSI) Classification (FSWC) represents a critical challenge in digital pathology. Conventional Multiple Instance Learning (MIL) methods rely on large volumes of annotated data and are susceptible to distribution shifts. Vision-Language Model (VLM)-based prompt learning methods enable parameter-efficient few-shot learning but are limited to patch-level feature aggregation, failing to model slide-level diagnostic information. As slide-level information is crucial for understanding tissue architecture and lesion distribution, we propose a Slide-Level Dual-Prompt Collaboration (SLDPC) framework for the FSWC task. Specifically, SLDPC leverages the representation learning capability of a slide-level VLM to perform prompt tuning directly at the slide level. A base prompt P is first obtained through continuous prompt initialization training and subsequently cloned to derive a parallel prompt P′. In addition, a bidirectional InfoNCE loss is employed to enhance feature-level alignment. During inference, a weighted fusion mechanism is introduced to combine both prompts and achieve efficient adaptation of slide-level multimodal representations. Experimental evaluation on four datasets validates the superiority of SLDPC. The results demonstrate that slide-level prompt learning effectively addresses FSWC challenges and improves model performance.

Citations: 0

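The bidirectional InfoNCE loss mentioned above treats the matched pair in each row of a similarity matrix as the positive and all other pairings in the batch as negatives, normalizing in both directions. A generic NumPy sketch (the paper's exact formulation may differ; `tau` is an assumed temperature):

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def bidirectional_infonce(a, b, tau=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    Row i of `a` and row i of `b` form a positive pair; every other
    pairing in the batch acts as a negative. Shapes: (n, d)."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-12)
    logits = a @ b.T / tau                          # (n, n) similarities
    log_p_ab = logits - logsumexp(logits, axis=1)   # a -> b direction
    log_p_ba = logits - logsumexp(logits, axis=0)   # b -> a direction
    diag = np.arange(logits.shape[0])
    return float(-(log_p_ab[diag, diag].mean()
                   + log_p_ba[diag, diag].mean()) / 2)
```

Perfectly aligned, well-separated embeddings drive the loss toward zero; mismatched pairs are penalized from both directions.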
Multi-level information fusion for explainable diagnosis of melanoma using dermoscopic images
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-04-30 | DOI: 10.1016/j.compmedimag.2026.102761 | Vol. 132, Article 102761
Ruitong Sun, Mohammad Rostami

Abstract: Melanoma is a prevalent and lethal form of cancer that is highly treatable when diagnosed at early stages. However, early detection remains challenging due to the subtle visual differences between malignant and benign skin lesions. We present a novel deep learning framework that integrates multi-level information fusion for explainable melanoma diagnosis. Our approach combines image segmentation and classification within a unified architecture, where segmentation masks highlighting clinically relevant regions are explicitly fused with classification features to guide diagnostic predictions. This fusion mechanism not only improves diagnostic performance but also provides clinically interpretable visual explanations that mimic the assessment process of expert dermatologists. Furthermore, we incorporate self-supervised learning to alleviate the dependency on large-scale annotated datasets, which are often costly and difficult to obtain in medical domains. Evaluated on the ISIC 2018 dataset, our framework achieves 83% AUC for melanoma classification compared to 76% for the baseline, while producing indicator localization masks that outperform existing methods across five clinical indicators. Notably, localization of rare indicators such as streaks improves from 5.16% to 18.66% in Continuous Dice coefficient, and of negative networks from 15.78% to 22.22%. The code is available at https://github.com/rusu4943/Bio-Unet.

Citations: 0

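The Dice coefficient used to score indicator localization above is twice the overlap between two masks divided by their total size. A minimal binary-mask implementation (illustrative; the paper reports a "Continuous Dice" variant on soft masks, which this hard-mask sketch does not cover):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape.

    `eps` avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))
```

Identical masks score 1.0, disjoint non-empty masks score approximately 0, and partial overlap falls in between.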
Adaptive prompting for dual-task learning: Towards high-quality MRI reconstruction from unidentified degradation
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-04-01 (Epub 2026-03-28) | DOI: 10.1016/j.compmedimag.2026.102757 | Vol. 130, Article 102757
Ning Jiang, Zhengyong Huang, Xingwen Sun, Peng Chen, Yuan He, Thang Cao, Yao Sui

Abstract: Degradation poses significant challenges to magnetic resonance imaging (MRI) scans. Unfortunately, degradation is inevitable and difficult to identify. Common sources of degradation arise from factors such as subject motion, thermal noise, and expedited scanning. Recent reconstruction-based methods primarily address reconstructions for scans suffering from a single, well-defined degradation source. In practical scenarios, however, multiple and unidentified sources of degradation frequently arise simultaneously within a single scan. Existing solutions therefore rely on sequential pipelines that first identify degradation sources and then apply degradation-specific models, which may overlook degradation sources and compromise scan integrity through repeated reconstructions. This study targets single-stage reconstruction of MRI scans affected by a combination of various unidentified degradation sources. We propose a unified reconstruction framework based on a dual-task learning strategy with prompt adaptation. Our technique focuses on learning effective degradation representations from degraded images, facilitating high-quality reconstruction of MRI scans with both high spatial resolution and elevated signal-to-noise ratio (SNR) while mitigating motion artifacts. We evaluated our method on three public datasets comprising clean and degraded MRI scans from 150 subjects, including unidentified degradations from five sources and real in-scanner motion artifacts. Experimental results demonstrated that our approach surpassed leading methods in terms of motion correction, SNR improvement, and resolution enhancement. The code is available at: https://github.com/NingJiang-git/UniRecon.

Citations: 0

A hybrid deep learning framework for epileptic seizure prediction using scalp and intracranial EEG data
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-04-01 (Epub 2026-03-28) | DOI: 10.1016/j.compmedimag.2026.102759 | Vol. 130, Article 102759
Izza Mujeeb Ahmad, Bisma Ashar, Shehryar Munir, Aamir Wali, Muhammad Saif ul Islam

Abstract: Epilepsy is a prevalent neurological disorder affecting over 50 million people globally, often impairing quality of life due to unpredictable and recurrent seizures. Despite advances in seizure prediction, challenges remain due to variability in EEG signals. EEG data exists in two primary modalities, non-invasive scalp EEG (sEEG) and invasive intracranial EEG (iEEG), and current deep learning models often exhibit limited generalizability across these modalities due to signal and structural differences. In this study, we propose a custom hybrid CNN-LSTM model optimized for scalp EEG, and assess its effectiveness across modalities by also evaluating it on invasive EEG datasets. We utilize the CHB-MIT (sEEG) and AES-Kaggle (iEEG) datasets, employing a unified pipeline of preprocessing, signal transformation, and deep learning techniques. The model achieved an accuracy of 98.84% on sEEG, using a 10-minute seizure anticipation window, outperforming conventional CNN, LSTM, and several recent state-of-the-art models. On iEEG data, the hybrid model performed comparably well, achieving an accuracy of 97.18%. The models are further validated using real-world scalp EEG data collected from the General Hospital Lahore, Pakistan. This additional validation demonstrates strong generalizability to clinical settings and confirms the practical applicability of the proposed approach. These findings not only highlight the effectiveness of the proposed approach for seizure prediction but also underscore its readiness for real-world deployment, potentially enabling reliable early warning systems to improve the lives of epilepsy patients.

Citations: 0

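Seizure-prediction pipelines like the one above typically slice continuous multichannel EEG into fixed-length, overlapping windows before feature extraction, with each window later labeled preictal or interictal relative to the anticipation horizon. A minimal windowing sketch (the sampling rate and window parameters below are illustrative, not the paper's):

```python
import numpy as np

def segment_eeg(signal, fs, win_sec, step_sec):
    """Slice a (channels, samples) EEG record into overlapping windows.

    Returns an array of shape (n_windows, channels, win_sec * fs).
    Trailing samples that do not fill a complete window are dropped."""
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    n = (signal.shape[1] - win) // step + 1
    return np.stack([signal[:, i * step : i * step + win] for i in range(n)])
```

For example, 10 seconds of 23-channel EEG at 256 Hz cut into 5-second windows with 50% overlap yields three windows of 1280 samples each.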
GRFormer: 3D reconstruction of liver and tumor via gridding and transformer-based point cloud completion
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-04-01 (Epub 2026-03-30) | DOI: 10.1016/j.compmedimag.2026.102760 | Vol. 130, Article 102760
Xun Wang, Wenqian Yu, Gang Wang, Qing Yang, Hanyu Wang, Runqiu Feng, Zhijun Xia, Tongyu Han, Nuo Xu

Abstract: Computed Tomography (CT) images can provide detailed information about human organs and lesions. However, their two-dimensional (2D) representation lacks explicit spatial depth, making it difficult to visualize three-dimensional (3D) anatomical structures. Reconstructing high-precision 3D shapes from 2D medical images has therefore become a significant challenge in computer vision and medical image analysis. To address this problem, we propose an innovative gridding and geometry-aware Transformer-based point cloud completion network (GRFormer) that can accurately reconstruct the 3D structure of the liver and tumors based on 2D contour information. GRFormer adopts a dual-branch feature extractor combined with a multi-stage point generation module, achieving progressive reconstruction from coarse-grained to fine-grained. We conduct systematic experimental validation on the public LiTS dataset. The quantitative evaluation and qualitative visualization analysis jointly show that GRFormer is capable of high-fidelity reconstruction of liver and tumor 3D geometries. In addition, we validate the model on clinical data provided by Shandong Provincial Hospital, and the reconstruction results are highly consistent with the judgment of professional physicians, proving the validity and reliability of the model in the actual clinical environment. In cross-dataset tests, GRFormer demonstrates excellent generalization capabilities, providing reliable technical support for clinical diagnosis and treatment planning. The code is publicly available at: https://github.com/yuwenqian0606/GRFormer.

Citations: 0

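The "gridding" stage named in the title maps an irregular point cloud onto a regular voxel grid so that convolutional layers can process it. A simplified hard-assignment sketch (the differentiable gridding used in such completion networks distributes interpolation weights over the eight neighboring grid vertices; that refinement is omitted here, and the resolution and bounds are illustrative):

```python
import numpy as np

def gridding(points, resolution=32, bounds=(-1.0, 1.0)):
    """Voxelize an (n, 3) point cloud into a binary occupancy grid.

    Each point within `bounds` is assigned to its nearest voxel; the
    result has shape (resolution, resolution, resolution)."""
    lo, hi = bounds
    idx = ((points - lo) / (hi - lo) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)       # keep boundary points inside
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

The inverse step (reverse gridding) samples point coordinates back from voxel features, which lets the network alternate between point and volumetric representations.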
Integrating AI into clinical practice: Human-centered design requirements for next-generation sequencing workflows
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-04-01 (Epub 2026-03-28) | DOI: 10.1016/j.compmedimag.2026.102758 | Vol. 130, Article 102758
Markus Plass, Andreas Holzinger, Robert Reihs, Heimo Müller

Abstract: Today, the integration of next-generation sequencing (NGS) into clinical genomics is increasingly AI-driven, with artificial intelligence (AI) underpinning every stage from data processing to decision support. While NGS enables rapid and scalable analysis of complex genetic information, paving the way for precision diagnostics and stratified treatment, the transition from potential to practice is hindered by fragmented workflows, limited usability, and non-standardized data interfaces. This paper introduces a design-oriented framework for embedding AI-powered NGS workflows into clinical decision support systems (CDSS), currently focused on genetic screening and tumor testing, but with components extendable to pathogen detection scenarios. DUXU (Design, User eXperience, Usability) is presented as a conceptual and methodological framework rather than a concrete implementation; its realization is intentionally flexible and must be adapted to the requirements, constraints, and objectives of specific clinical use cases. Future work will adapt data requirements (e.g., taxonomic classification instead of variant calling), functional workflows (e.g., microbial genome assembly), and stakeholder roles (e.g., microbiologists in antimicrobial stewardship). Grounded in real-world clinical environments and aligned with standards such as FHIR and GA4GH, we highlight the central role of AI in multimodal data interpretation, patient-specific visualization, and transparent, explainable decision-making. Anchored in DUXU principles, our approach addresses the socio-technical demands of clinical genomics and proposes actionable design requirements for interoperable, role-specific interfaces. This work advances the development of explainable, interpretable, trustworthy, and operationally embedded AI-based NGS systems in clinical practice.

Citations: 0

Agent-MIRA: AI-orchestrated medical imaging agent for PET image retrieval and assistance
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-03-01 (Epub 2026-02-09) | DOI: 10.1016/j.compmedimag.2026.102725 | Vol. 129, Article 102725
Rajat Vashistha, Sandra Brosda, Lauren G. Aoude, Jessica Ng, Parveen Kundu, Andrew P. Barbour, Viktor Vegh

Abstract: Reporting on medical images can be time-consuming, especially in high-volume clinical settings. AI agents designated to specific medical imaging tasks can potentially lead to improvements in clinical workflows. We present a prototype AI agent to support clinical decision making by retrieving the medical images that best match a patient's images based on similarity, together with an uncertainty estimate. The framework requires clinical metadata and PET images with lesion segmentation. A new patient's PET scan is processed by converting it to a feature vector representative of the image, which then enables the retrieval of the nearest feature vector neighbors by querying a database. A comparison between radiomics and fine-tuned DINOv2 features was performed. Conditional uncertainty, an estimation based on feature significance, is calculated to state the level of confidence in similarity between patients. The AI agent, using DINOv2-derived features, retrieves a consistent set of patient cases that are phenotypically similar to the new patient. Each retrieved case is accompanied by clinical metadata, including cancer type, treatment history, and survival outcome. The agent also provides an estimate of the uncertainty in the matches and attention-based visualization to interpret the DINOv2 features. It is validated using eight independent patient test cases, with benchmarking via clinical scoring to establish the level of support achieved for AI-agent-orchestrated clinical decision-making. Scoring by the clinicians showed good correlation between the new patient and the retrieved database images with respect to the low-uncertainty matches. We also integrated image-based retrieval with an entirely parallel text-embedding index of external clinical trials, thereby coupling case-based reasoning with evidence-based medicine in a single query interface using a large language model.

Citations: 0

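The retrieval step described above, converting a scan to a feature vector and querying its nearest neighbors in a database, reduces to a similarity search. A minimal cosine-similarity sketch (illustrative only; the paper additionally estimates conditional uncertainty for each match, which is not modeled here):

```python
import numpy as np

def retrieve_similar(query_vec, database, k=3):
    """Return indices and scores of the k database feature vectors most
    similar to the query by cosine similarity.

    query_vec: (d,); database: (n, d) with non-zero rows."""
    q = query_vec / np.linalg.norm(query_vec)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                    # cosine similarity to every case
    top = np.argsort(-sims)[:k]      # best-matching case indices first
    return top, sims[top]
```

In a full system, the returned indices would be joined against clinical metadata (cancer type, treatment history, survival outcome) for each retrieved case.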
Segmentation-aware Generative Reinforcement Network (GRN) for tissue layer segmentation in 3-D ultrasound images for chronic low-back pain (cLBP) assessment
IF 4.9 | CAS Tier 2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2026-03-01 (Epub 2026-02-26) | DOI: 10.1016/j.compmedimag.2026.102736 | Vol. 129, Article 102736
Zixue Zeng, Xiaoyan Zhao, Matthew Cartier, Tong Yu, Jing Wang, Xin Meng, Zhiyu Sheng, Maryam Satarpour, John M. Cormack, Allison Bean, Ryan Nussbaum, Maya Maurer, Emily Landis-Walkenhorst, Dinesh Kumbhare, Kang Kim, Ajay D. Wasan, Jiantao Pu

Abstract: Layer-wise segmentation of three-dimensional (3D) ultrasound for chronic lower back pain (cLBP) requires a large amount of labeled images. To mitigate this burden, we propose a Generative Reinforcement Network (GRN) that integrates a generative adversarial network (GAN) framework with a segmentation model. The generator is coupled to a segmentor via segmentation-aware feedback and regularized by a discriminator. At each iteration, the segmentation loss is back-propagated into the generator to produce easy-to-learn reconstructions that directly reduce downstream segmentation error (reinforcement augmentation, RAug), while adversarial feedback from the discriminator (PatchGAN) encourages realistic reconstructions. We also introduce segmentation-guided enhancement (SGE), where the pre-trained generator enhances input images at inference to improve segmentation. GRN has two variants: GRN-SEL, which uses RAug only, and GRN-SSL, which additionally applies interpolation-consistency training (ICT) on unlabeled data by interpolating generator-reconstructed pairs and enforcing prediction consistency. We evaluate GRN primarily on a fully annotated lumbar back ultrasound dataset (MUSCLE). Two public benchmark datasets (skin lesion, Kvasir) were also used to demonstrate its generalizability. On the MUSCLE dataset, GRN-SEL with SGE reduces labeling efforts by up to 70% while improving the Dice Similarity Coefficient (DSC) by 1.98% compared to models trained on fully labeled datasets. Across all three datasets and label fractions, GRN consistently outperforms state-of-the-art semi-supervised methods. These results suggest the effectiveness of the GRN framework in optimizing segmentation performance with significantly less labeled data. The source code is publicly available at https://github.com/Francisdadada/GRN.

Citations: 0

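Interpolation-consistency training (ICT), used by the GRN-SSL variant above, enforces that a model's prediction on a mixed input matches the same mix of its individual predictions. A generic sketch of the ICT penalty on unlabeled pairs (the `model` callable, shapes, and Beta-distribution parameter are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def interpolation_consistency_loss(model, x1, x2, alpha=0.3, rng=None):
    """ICT penalty: prediction on lam*x1 + (1-lam)*x2 should match
    lam*model(x1) + (1-lam)*model(x2).

    `model` maps a batch (n, d) to outputs (n, c); lam ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)
    mixed_pred = model(lam * x1 + (1 - lam) * x2)   # predict on mixed input
    target = lam * model(x1) + (1 - lam) * model(x2)  # mix of predictions
    return float(np.mean((mixed_pred - target) ** 2))
```

A linear model satisfies the constraint exactly, so its ICT loss is (numerically) zero; for nonlinear networks the penalty encourages smooth decision boundaries in low-density regions of the unlabeled data.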