Journal of imaging informatics in medicine: Latest Articles

U-ConvNext: A Robust Approach to Glioma Segmentation in Intraoperative Ultrasound.
Journal of imaging informatics in medicine Pub Date : 2025-09-11 DOI: 10.1007/s10278-025-01648-7
Amir M Vahdani, Mahdiyeh Rahmani, Ahmad Pour-Rashidi, Alireza Ahmadian, Parastoo Farnia
Intraoperative tumor imaging is critical to achieving maximal safe resection during neurosurgery, especially for low-grade glioma resection. Given the convenience of ultrasound as an intraoperative imaging modality, but also its limitations and the time-consuming process of manual tumor segmentation, we propose a learning-based model for accurate segmentation of low-grade gliomas in ultrasound images. We developed a novel U-Net-based architecture, titled U-ConvNext, which adopts the block architecture of the ConvNeXt V2 model and incorporates further architectural improvements including global response normalization, fine-tuned kernel sizes, and inception layers. We also adapted the CutMix data augmentation technique for semantic segmentation, aiming for enhanced texture detection. Conformal segmentation, a novel approach to conformal prediction for binary semantic segmentation, was also developed for uncertainty quantification, providing calibrated measures of model uncertainty in a visual format. The proposed models were trained and evaluated on three subsets of images in the RESECT dataset and achieved hold-out test Dice scores of 84.63%, 74.52%, and 90.82% on the "before," "during," and "after" subsets, respectively, an increase of roughly 13-31% over the state of the art. Furthermore, external evaluation on the ReMIND dataset indicated robust performance (Dice score of 79.17% [95% CI: 77.82-81.62]) and only a moderate decline of < 3% in expected calibration error. Our approach integrates various innovations in model design, model training, and uncertainty quantification, achieving improved results on the segmentation of low-grade glioma in ultrasound images during neurosurgery.
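The hold-out Dice scores above use the standard overlap metric for binary segmentation. As a minimal sketch (not the authors' code), the coefficient for two binary masks can be computed as:

```python
def dice_score(pred, target):
    """Dice similarity coefficient for two binary masks, given as flat 0/1 sequences."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks": 2 overlapping positives out of 3 predicted and 3 true.
print(dice_score([0, 1, 1, 1, 0, 0], [0, 1, 1, 0, 0, 1]))  # 2*2/(3+3) = 0.666...
```

In practice the masks would be flattened 2-D ultrasound label maps; the formula is unchanged.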
Citations: 0
Influence of Enhancement Filter Application in Panoramic Radiographs in the Detection of External Root Resorption in Mandibular Second Molars Caused by Impacted Third Molars.
Journal of imaging informatics in medicine Pub Date : 2025-09-11 DOI: 10.1007/s10278-025-01671-8
Matheus Sampaio-Oliveira, Lucas da Silva Ponce, Thaisa Pinheiro Silva, Rocharles Cavalcante Fontelene, Amanda Farias-Gomes, Matheus L Oliveira, Deborah Queiroz Freitas
To evaluate the influence of enhancement filters on panoramic radiographs for the detection of external root resorption (ERR) in lower second molars caused by impacted third molars. One hundred and twenty-four panoramic radiographs with 182 lower third molars were selected. The corresponding cone-beam computed tomography scans were assessed to create the reference standard. The panoramic radiographs were then exported with and without four enhancement filters (Sharpness, Invert, Invert + Sharpness, and Equalized + Magnification) and independently evaluated by five examiners using a five-point scale for ERR presence. The area under the ROC curve (AUC), sensitivity, and specificity were calculated, and differences between filters were analyzed using ANOVA and Tukey's test. Intra- and inter-examiner agreement was assessed using the weighted kappa index. Diagnostic values were similar across filters for AUC (0.56 to 0.61; p = 0.359), but sensitivity was lower (0.25 to 0.50; p = 0.004) and specificity higher (p = 0.002) for the Invert and Invert + Sharpness filters than for the other conditions. Intra-examiner agreement was moderate, while inter-examiner agreement was fair. The diagnostic accuracy of ERR detection in mandibular second molars on panoramic radiographs is unaffected by enhancement filters, but sensitivity decreases with the Invert and Invert + Sharpness filters.
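The intra- and inter-examiner agreement above is reported as weighted kappa on an ordinal five-point scale. A from-scratch sketch follows, assuming quadratic weights (the abstract does not state the weighting scheme):

```python
def weighted_kappa(rater_a, rater_b, n_classes):
    """Quadratic-weighted Cohen's kappa for two raters on an ordinal scale 0..n_classes-1."""
    observed = [[0] * n_classes for _ in range(n_classes)]
    for x, y in zip(rater_a, rater_b):
        observed[x][y] += 1
    n = len(rater_a)
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weight
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j] / n  # chance-expected disagreement
    return 1.0 - num / den

# Identical ratings on a 5-point scale give kappa = 1.0.
print(weighted_kappa([0, 1, 2, 3, 4, 2], [0, 1, 2, 3, 4, 2], 5))  # 1.0
```

`sklearn.metrics.cohen_kappa_score(..., weights="quadratic")` computes the same quantity for real data.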
Citations: 0
Implementing a Resource-Light and Low-Code Large Language Model System for Information Extraction from Mammography Reports: A Pilot Study.
Journal of imaging informatics in medicine Pub Date : 2025-09-10 DOI: 10.1007/s10278-025-01659-4
Fabio Dennstädt, Simon Fauser, Nikola Cihoric, Max Schmerder, Paolo Lombardo, Grazia Maria Cereghetti, Sandro von Däniken, Thomas Minder, Jaro Meyer, Lawrence Chiang, Roberto Gaio, Luc Lerch, Irina Filchenko, Daniel Reichenpfader, Kerstin Denecke, Caslav Vojvodic, Igor Tatalovic, André Sander, Janna Hastings, Daniel M Aebersold, Hendrik von Tengg-Kobligk, Knud Nairz
Large language models (LLMs) have been successfully used for data extraction from free-text radiology reports, but most studies to date have accessed LLMs via an application programming interface (API). We evaluated the feasibility of using open-source LLMs, deployed on limited local hardware, for data extraction from free-text mammography reports using a common data element (CDE)-based structure. Seventy-nine CDEs were defined by an interdisciplinary expert panel, reflecting real-world reporting practice. Sixty-one reports were classified by two independent researchers to establish ground truth. Five open-source LLMs deployable on a single GPU were used for data extraction via the general-classifier Python package. Extractions were performed with five different prompt approaches, and overall accuracy, micro-recall, and micro-F1 were calculated. Additional analyses applied thresholds to the relative probability of classifications. Inter-rater agreement between the manual classifiers was high (Cohen's kappa 0.83). Using default prompts, the LLMs achieved accuracies of 59.2-72.9%. Chain-of-thought prompting yielded mixed results, while few-shot prompting decreased accuracy. Adapting the default prompts to precisely define the classification tasks improved performance for all models, with accuracies of 64.7-85.3%. Setting certainty thresholds further improved accuracy to > 90% but reduced the coverage rate to < 50%. Locally deployed open-source LLMs can effectively extract information from mammography reports while remaining compatible with limited computational resources. Selection and evaluation of the model and prompting strategy are critical, and clear, task-specific instructions appear crucial for high performance. A CDE-based framework provides clear semantics and structure for the data extraction.
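The certainty-threshold trade-off described above (higher accuracy, lower coverage) can be sketched as follows; the tuple format and the BI-RADS-style labels are hypothetical, not taken from the paper:

```python
def threshold_filter(predictions, threshold):
    """Keep only classifications whose relative probability meets the threshold.

    predictions: list of (predicted_label, probability, true_label) tuples (hypothetical format).
    Returns (accuracy on kept items, coverage rate); accuracy is None if nothing is kept.
    """
    kept = [(p, t) for p, prob, t in predictions if prob >= threshold]
    coverage = len(kept) / len(predictions)
    if not kept:
        return None, 0.0
    accuracy = sum(p == t for p, t in kept) / len(kept)
    return accuracy, coverage

preds = [("BI-RADS 2", 0.97, "BI-RADS 2"), ("BI-RADS 4", 0.62, "BI-RADS 3"),
         ("BI-RADS 1", 0.94, "BI-RADS 1"), ("BI-RADS 5", 0.58, "BI-RADS 4")]
print(threshold_filter(preds, 0.9))  # (1.0, 0.5): perfect accuracy on half the reports
```

Raising the threshold drops low-confidence extractions, which is exactly the accuracy-versus-coverage pattern the study reports.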
Citations: 0
Deep-Learning System for Automatic Measurement of the Femorotibial Rotational Angle on Lower-Extremity Computed Tomography.
Journal of imaging informatics in medicine Pub Date : 2025-09-10 DOI: 10.1007/s10278-025-01641-0
Sheen-Woo Lee, Gi Pyo Lee, Ieun Yoon, Young Jae Kim, Kwang Gi Kim
To develop and validate a deep-learning-based algorithm for automatically identifying anatomical landmarks and calculating femoral version and tibial torsion (FTT) angles on lower-extremity CT scans. In this IRB-approved, retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; female-to-male ratio, 235:35) were analyzed. CT data were preprocessed using contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue-boundary distinction. An Attention U-Net model was trained against a gold standard of manual labeling and landmark drawing, enabling it to segment bones, detect landmarks, construct reference lines, and automatically measure femoral version and tibial torsion angles. The model's performance was validated against manual segmentations by a musculoskeletal radiologist on a test dataset. The segmentation model demonstrated 92.16% ± 0.02 sensitivity, 99.96% ± <0.01 specificity, and an HD95 of 2.14 ± 2.39, with a Dice similarity coefficient (DSC) of 93.12% ± 0.01. Automatic measurements of femoral and tibial torsion angles correlated well with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced the measurement time per leg compared to manual methods (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a deep-learning method to automate the measurement of femorotibial rotation on continuous axial CT scans of patients with osteoarthritis (OA), with the potential to expedite the analysis of patient data in busy clinical settings.
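Version and torsion angles like those above are ultimately angles between two landmark-defined lines on an axial slice. A minimal 2-D sketch (not the authors' pipeline) of that final geometric step:

```python
import math

def line_angle_deg(p1, p2, q1, q2):
    """Unsigned angle in degrees between line p1->p2 and line q1->q2 (2-D landmark points)."""
    v = (p2[0] - p1[0], p2[1] - p1[1])
    w = (q2[0] - q1[0], q2[1] - q1[1])
    dot = v[0] * w[0] + v[1] * w[1]
    cross = v[0] * w[1] - v[1] * w[0]
    # atan2 of the cross and dot products gives the signed angle between the vectors.
    return abs(math.degrees(math.atan2(cross, dot)))

# Perpendicular landmark lines -> 90 degrees.
print(line_angle_deg((0, 0), (1, 0), (0, 0), (0, 1)))  # 90.0
```

In the deep-learning system, the landmark coordinates feeding such a computation come from the segmentation and detection stages.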
Citations: 0
Comparative Analysis of Total Cost of Ownership: Commercial-Grade versus Diagnostic-Grade Displays in Remote Radiology Workstations.
Journal of imaging informatics in medicine Pub Date : 2025-09-08 DOI: 10.1007/s10278-025-01651-y
Katie Hulme, Jennifer Arnold, Ryan Thomas, Roy Kittelberger, Namita Gandhi, Douglas Nachand, Po-Hao Chen
With the increasing shift towards remote radiology work, institutions face the challenge of balancing cost-effectiveness with operational reliability. This experiential report presents a comparative analysis of the total cost of ownership (TCO) of commercial-grade displays (CGDs) and diagnostic-grade displays (DGDs) in remote diagnostic stations. We evaluate the direct and indirect costs associated with each display type using activity-based costing, focusing on deployment, quality control (QC) processes, and ongoing maintenance. Our findings suggest that while CGDs offer significant initial cost savings, their long-term maintenance requirements, particularly manual QC, increase recurring expenses. The crossover point for CGDs with monthly QC occurs at approximately five years against mid-level DGDs, potentially making DGDs the more sustainable option over time.
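The crossover point described above falls out of a simple cumulative-cost comparison. The sketch below uses invented cost figures chosen only to illustrate a roughly five-year crossover; the report's actual dollar amounts are not public in this listing:

```python
def crossover_year(initial_a, annual_a, initial_b, annual_b, horizon=15):
    """First whole year at which cumulative TCO of option A exceeds option B.

    All cost figures are hypothetical; returns None if no crossover within the horizon.
    """
    for year in range(1, horizon + 1):
        if initial_a + annual_a * year > initial_b + annual_b * year:
            return year
    return None

# Illustrative (invented) figures: a cheap commercial display with costly monthly
# manual QC overtakes a pricier diagnostic display with automated QC around year five.
print(crossover_year(500, 600, 3200, 50))  # 5
```

Activity-based costing would refine the two `annual_*` terms into per-activity rates, but the crossover logic is the same.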
Citations: 0
Development and Implementation of an Ambulatory Orders-Based Ophthalmology Imaging Workflow.
Journal of imaging informatics in medicine Pub Date : 2025-09-08 DOI: 10.1007/s10278-025-01660-x
Michael K Oswald, Cieara Presley, Laurie A Perry, JaTawna Bush, Bessie Ganim, David Hulefeld, Jay Patel, Melissa Scott, Veeral S Shah, Evan Slavik, Sarah Smith, Kelli Vieson, Fred Walker, Alexander J Towbin
Background: Ocular imaging is essential to the diagnosis and management of eye disease, yet standardized imaging workflows remain underdeveloped in the eye care setting. This manuscript describes the design and implementation of an orders-based imaging workflow for ambulatory ophthalmology integrated with the electronic health record and enterprise imaging systems.
Methods: We developed a DICOM-compliant workflow for pediatric ophthalmology imaging that supports HL7 integration, DICOM modality worklists, and enterprise archive storage. Workflow steps were automated where possible, including imaging order placement, metadata application, and documentation. Pre- and post-implementation workflows were evaluated for efficiency by measuring mouse clicks, keystrokes, and time to completion. Imaging study volumes and billing data were collected after implementation.
Results: Thirteen imaging devices across six modalities and five locations were integrated. The new workflow reduced manual data entry and enabled structured documentation. Post-implementation, 16,267 imaging studies were completed, generating $11.4 million in billed charges. Workflow efficiency improved, with keystrokes and mouse clicks reduced by 40-86% depending on modality. Time savings were observed in most modalities, although optical coherence tomography and B-scan ultrasound required more time due to order-specific imaging steps.
Conclusion: An orders-based imaging workflow for ophthalmology can improve efficiency, standardization, and interoperability while enabling proper image archiving and billing. Challenges remain with vendor variability in DICOM support and image output formats. These findings support broader adoption of standardized imaging practices in ophthalmology.
Citations: 0
Enhancing Breast Density Assessment in Mammograms Through Artificial Intelligence.
Journal of imaging informatics in medicine Pub Date : 2025-09-05 DOI: 10.1007/s10278-025-01657-6
Naila Camila da Rocha, Abner Macola Pacheco Barbosa, Yaron Oliveira Schnr, Lucas Dias Borges Peres, Luis Gustavo Modelli de Andrade, Guilherme Jordao de Magalhaes Rosa, Eduardo Carvalho Pessoa, Jose Eduardo Corrente, Liciana Vaz de Arruda Silveira
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings; there is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study develops and evaluates an open-source, computer-vision-based approach using deep learning for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of the four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external, independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
Citations: 0
Interpretable Semi-federated Learning for Multimodal Cardiac Imaging and Risk Stratification: A Privacy-Preserving Framework.
Journal of imaging informatics in medicine Pub Date : 2025-09-05 DOI: 10.1007/s10278-025-01643-y
XianFang Liu, ShunLei Li, Qin Zhu, ShaoKun Xu, QinYang Jin
The growing heterogeneity of cardiac patient data from hospitals and wearables necessitates predictive models that are personalized, comprehensible, and privacy-preserving. This study introduces PerFed-Cardio, a lightweight and interpretable semi-federated learning (Semi-FL) system for real-time cardiovascular risk stratification using multimodal data, including cardiac imaging, physiological signals, and electronic health records (EHR). In contrast to conventional federated learning, where all clients participate uniformly, our personalized Semi-FL approach lets high-capacity nodes (e.g., hospitals) conduct comprehensive training while edge devices (e.g., wearables) refine shared models via modality-specific subnetworks. Cardiac MRI and echocardiography images are analyzed with lightweight convolutional neural networks enhanced by local attention modules that highlight diagnostically significant areas. Physiological signals (e.g., ECG, activity) and EHR data are combined through attention-based fusion layers. Model transparency is achieved using Local Interpretable Model-agnostic Explanations (LIME) and Grad-CAM, which offer spatial and feature-level explanations for each prediction. Evaluations on real multimodal datasets from 123 patients across five simulated institutions indicate that PerFed-Cardio attains an AUC-ROC of 0.972 with an inference latency of 130 ms. Personalized model calibration and targeted training reduce communication load by 28% while maintaining an F1-score above 92% under noisy conditions. These findings position PerFed-Cardio as a privacy-conscious, adaptive, and interpretable system for scalable cardiac risk assessment.
Citations: 0
Mask-Guided and Fidelity-Constrained Deep Learning Model for Accurate Translation of Brain CT Images to Diffusion MRI Images in Acute Stroke Patients.
Journal of imaging informatics in medicine Pub Date : 2025-09-02 DOI: 10.1007/s10278-025-01649-6
Muhammad Adil Khalil, Mariusz Bajger, Anthony Skeats, Chris Delnooz, Andrew Dwyer, Gobert Lee
Early and precise diagnosis of stroke plays an important role in treatment planning. Computed tomography (CT) is used as the first-line diagnostic tool for quick diagnosis and to rule out haemorrhage. Diffusion magnetic resonance imaging (MRI) provides superior sensitivity to CT for detecting early acute ischaemia and small lesions, but its long scan time and limited availability make it infeasible for emergency settings. To address this problem, this study presents a brain mask-guided and fidelity-constrained cycle-consistent generative adversarial network for translating CT images into diffusion MRI images for stroke diagnosis. A brain mask is concatenated with the input CT image and fed to the generator to encourage focus on the critical foreground areas, and a fidelity-constrained loss preserves detail for better translation results. A publicly available dataset, A Paired CT-MRI Dataset for Ischemic Stroke Segmentation (APIS), is used to train and test the models. The proposed method yields an MSE of 197.45 [95% CI: 180.80, 214.10], PSNR of 25.50 [95% CI: 25.10, 25.92], and SSIM of 88.50 [95% CI: 87.50, 89.50] on the test set, significantly improving on techniques based on U-Net, cycle-consistent generative adversarial networks (CycleGAN), and attention GANs. Furthermore, an ablation study demonstrates the effectiveness of incorporating the fidelity-constrained loss and brain mask information as a soft guide when translating CT images into diffusion MRI images. The experimental results demonstrate that the proposed approach has the potential to support faster and more precise diagnosis of stroke.
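The MSE and PSNR figures above are directly related: PSNR is a log-scaled ratio of the squared peak intensity to the MSE. A minimal sketch, assuming 8-bit intensities (the paper's intensity scaling is not stated in this listing):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mean_sq_err, max_val=255.0):
    """Peak signal-to-noise ratio in dB for a given MSE and peak intensity."""
    return 10.0 * math.log10(max_val ** 2 / mean_sq_err)

print(mse([0, 0], [10, 10]))  # 100.0
print(psnr(650.25))           # 255**2 / 650.25 == 100, so exactly 20.0 dB
```

With `max_val=255`, an MSE near 197 corresponds to a PSNR in the mid-20 dB range, consistent in magnitude with the reported values.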
Citations: 0
Comparative Evaluation of Deep Learning and Foundation Model Embeddings for Osteoarthritis Feature Classification in Knee Radiographs.
Journal of imaging informatics in medicine Pub Date : 2025-09-02 DOI: 10.1007/s10278-025-01636-x
Mohammadreza Chavoshi, Hari Trivedi, Janice Newsome, Aawez Mansuri, Frank Li, Theo Dapamede, Bardia Khosravi, Judy Gichoya
Foundation models (FMs) offer a promising alternative to supervised deep learning (DL) by enabling greater flexibility and generalizability without relying on large, labeled datasets. This study investigates the performance of supervised DL models and pre-trained FM embeddings in classifying radiographic features related to knee osteoarthritis. We analyzed 44,985 knee radiographs from the Osteoarthritis Initiative dataset. Two convolutional neural network models (ResNet18 and ConvNeXt-Small) were trained to classify osteophytes, joint space narrowing, subchondral sclerosis, and Kellgren-Lawrence grades (KLG). These models were compared against two FMs: BiomedCLIP, a multimodal vision-language model pre-trained on diverse medical images and text, and RAD-DINO, a vision transformer model pre-trained exclusively on chest radiographs. We extracted image embeddings from both FMs and used XGBoost classifiers for downstream classification. Performance was assessed using comprehensive classification metrics appropriate for binary and multi-class tasks. DL models outperformed FM-based approaches across all tasks. ConvNeXt achieved the highest performance in predicting KLG, with a weighted Cohen's kappa of 0.880 and higher AUC in binary tasks. BiomedCLIP and RAD-DINO performed similarly, and BiomedCLIP's prior exposure to knee radiographs during pretraining led to only slight improvements. Zero-shot classification using BiomedCLIP correctly identified 91.14% of knee radiographs, with most failures associated with low image quality. Grad-CAM visualizations revealed that the DL models, particularly ConvNeXt, reliably focused on clinically relevant regions. While FMs offer promising utility in auxiliary imaging tasks, supervised DL remains superior for fine-grained radiographic feature classification in domains with limited pretraining representation, such as musculoskeletal imaging.
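The FM pipeline above freezes the encoder and trains only a lightweight classifier on the extracted embeddings (XGBoost in the study). As a dependency-free stand-in for that downstream classifier, the sketch below fits a nearest-centroid model on toy embeddings; the vectors and label names are invented for illustration:

```python
def fit_centroids(embeddings, labels):
    """Per-class mean of frozen embedding vectors (a simple downstream classifier)."""
    sums, counts = {}, {}
    for emb, lab in zip(embeddings, labels):
        acc = sums.setdefault(lab, [0.0] * len(emb))
        for i, v in enumerate(emb):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, emb):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    dist2 = lambda c: sum((a - b) ** 2 for a, b in zip(c, emb))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))

# Toy 2-D "embeddings" from a frozen encoder, labeled by OA status.
cents = fit_centroids([[0, 0], [0, 1], [10, 10], [10, 11]],
                      ["no-OA", "no-OA", "OA", "OA"])
print(predict(cents, [9, 10]))  # OA
```

Swapping in `xgboost.XGBClassifier` on real FM embeddings follows the same fit/predict shape; the study's finding is that even a strong downstream classifier cannot close the gap to end-to-end supervised DL here.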
Citations: 0