Journal of imaging informatics in medicine: Latest Articles

Development and Evaluation of an Automated Histomorphometric Analysis Method for the Assessment of Implant Osseointegration.
Journal of imaging informatics in medicine Pub Date: 2025-07-28 DOI: 10.1007/s10278-025-01611-6
Lucas de Sousa Goulart Pereira, Jovânia Alves Oliveira, Elcio Marcantonio, André Ricardo Backes, Guilherme José Pimentel Lopes de Oliveira
{"title":"Development and Evaluation of an Automated Histomorphometric Analysis Method for the Assessment of Implant Osseointegration.","authors":"Lucas de Sousa Goulart Pereira, Jovânia Alves Oliveira, Elcio Marcantonio, André Ricardo Backes, Guilherme José Pimentel Lopes de Oliveira","doi":"10.1007/s10278-025-01611-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01611-6","url":null,"abstract":"<p><p>This study evaluated an automated histomorphometric analysis method for assessing implant osseointegration. Sixty-eight histological images of implants installed in the tibiae of 40 rats were analyzed using the manual method and ImageJ software to evaluate the %BIC (bone-implant contact) and %BBT (bone between the implant threads). In the automated histomorphometric analysis method, the similarity of the RGB color channels was analyzed, and the black background region of the image was excluded, to select the region of interest. The contour curve was extracted, excluding points that do not belong to the contact between the implant and the bone, i.e., the image edges. A Gaussian filter was applied to smooth small imperfections in the curve. An analysis of the curve points in relation to their neighboring points was conducted, and the local maximum of the curve was selected, excluding points that were too close together and keeping only those that represent the peaks of the implant curve. The peaks and the corresponding portion of the curve were used to select the region of the original image and calculate the %BIC and %BBT values. A comparison of the results obtained for both parameters in both evaluation methods was then performed using Pearson's correlation test. A strong correlation was observed between the analyses (r = 0.99). Thus, the proposed automated histomorphometric analysis method was shown to be reliable for assessing implant osseointegration.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144736512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01619-y
Tuan Truong, Ivo M Baltruschat, Mark Klemens, Grit Werner, Matthias Lenga
{"title":"Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.","authors":"Tuan Truong, Ivo M Baltruschat, Mark Klemens, Grit Werner, Matthias Lenga","doi":"10.1007/s10278-025-01619-y","DOIUrl":"https://doi.org/10.1007/s10278-025-01619-y","url":null,"abstract":"<p><p>De-identification of medical images is a critical step to ensure privacy during data sharing in research and clinical settings. The initial step in this process involves detecting Protected Health Information (PHI), which can be found in image metadata or imprinted within image pixels. Despite the importance of such systems, there has been limited evaluation of existing AI-based solutions, creating barriers to the development of reliable and robust tools. In this study, we present an AI-based pipeline for PHI detection, comprising three key modules: text detection, text extraction, and text analysis. We benchmark three models-YOLOv11, EasyOCR, and GPT-4o- across different setups corresponding to these modules, evaluating their performance on two different datasets encompassing multiple imaging modalities and PHI categories. Our findings indicate that the optimal setup involves utilizing dedicated vision and language models for each module, which achieves a commendable balance in performance, latency, and cost associated with the usage of large language models (LLMs). Additionally, we show that the application of LLMs not only involves identifying PHI content but also enhances OCR tasks and facilitates an end-to-end PHI detection pipeline, showcasing promising outcomes through our analysis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01615-2
Hanae Mahmoudi, Hiba Ramadan, Jamal Riffi, Hamid Tairi
{"title":"3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.","authors":"Hanae Mahmoudi, Hiba Ramadan, Jamal Riffi, Hamid Tairi","doi":"10.1007/s10278-025-01615-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01615-2","url":null,"abstract":"<p><p>Multimodal image registration is crucial in medical imaging, particularly for aligning Magnetic Resonance Imaging (MRI) and Transrectal Ultrasound (TRUS) data, which are widely used in prostate cancer diagnosis and treatment planning. However, this task presents significant challenges due to the inherent differences between these imaging modalities, including variations in resolution, contrast, and noise. Recently, conventional Convolutional Neural Network (CNN)-based registration methods, while effective at extracting local features, often struggle to capture global contextual information and fail to adapt to complex deformations in multimodal data. Conversely, Transformer-based methods excel at capturing long-range dependencies and hierarchical features but face difficulties in integrating fine-grained local details, which are essential for accurate spatial alignment. To address these limitations, we propose a novel 3D image registration framework that combines the strengths of both paradigms. Our method employs a Swin Transformer (ST)-CNN encoder-decoder architecture, with a key innovation focusing on enhancing the skip connection stages. Specifically, we introduce an innovative module named Wavelet-3D-Depthwise-Attention (WDA). The WDA module leverages an attention mechanism that integrates wavelet transforms for multi-scale spatial-frequency representation and 3D-Depthwise convolution to improve computational efficiency and modality fusion. Experimental evaluations on clinical MRI/TRUS datasets confirm that the proposed method achieves a median Dice score of 0.94 and a target registration error of 0.85, indicating an improvement in registration accuracy and robustness over existing state-of-the-art (SOTA) methods. The WDA-enhanced skip connections significantly empower the registration network to preserve critical anatomical details, making our method a promising advancement in prostate multimodal registration. Furthermore, the proposed framework shows strong potential for generalization to other image registration tasks.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Prediction of TMJ Disc Displacement in CBCT Images Using Machine Learning.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01609-0
Hanseung Choi, Kug Jin Jeon, Chena Lee, Yoon Joo Choi, Gyu-Dong Jo, Sang-Sun Han
{"title":"Automatic Prediction of TMJ Disc Displacement in CBCT Images Using Machine Learning.","authors":"Hanseung Choi, Kug Jin Jeon, Chena Lee, Yoon Joo Choi, Gyu-Dong Jo, Sang-Sun Han","doi":"10.1007/s10278-025-01609-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01609-0","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) is the gold standard for diagnosing disc displacement in temporomandibular joint (TMJ) disorders, but its high cost and practical challenges limit its accessibility. This study aimed to develop a machine learning (ML) model that can predict TMJ disc displacement using only cone-beam computed tomography (CBCT)-based radiomics features without MRI. CBCT images of 247 mandibular condyles from 134 patients who also underwent MRI scans were analyzed. To conduct three experiments based on the classification of various patient groups, we trained two ML models, random forest (RF) and extreme gradient boosting (XGBoost). Experiment 1 classified the data into three groups: Normal, disc displacement with reduction (DDWR), and disc displacement without reduction (DDWOR). Experiment 2 classified Normal versus disc displacement group (DDWR and DDWOR), and Experiment 3 classified Normal and DDWR versus DDWOR group. The RF model showed higher performance than XGBoost across all three experiments, and in particular, Experiment 3, which differentiated DDWOR from other conditions, achieved the highest accuracy with an area under the receiver operating characteristic curve (AUC) values of 0.86 (RF) and 0.85 (XGBoost). Experiment 2 followed with AUC values of 0.76 (RF) and 0.75 (XGBoost), while Experiment 1, which classified all three groups, had the lowest accuracy of 0.63 (RF) and 0.59 (XGBoost). The RF model, utilizing radiomics features from CBCT images, demonstrated potential as an assistant tool for predicting DDWOR, which requires the most careful management.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-Preserving Generation of Structured Lymphoma Progression Reports from Cross-sectional Imaging: A Comparative Analysis of Llama 3.3 and Llama 4.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01618-z
Philipp Prucker, Keno K Bressem, Su Hwan Kim, Dominik Weller, Avan Kader, Felix J Dorfner, Sebastian Ziegelmayer, Markus M Graf, Tristan Lemke, Florian Gassert, Elif Can, Aymen Meddeb, Daniel Truhn, Martin Hadamitzky, Marcus R Makowski, Lisa C Adams, Felix Busch
{"title":"Privacy-Preserving Generation of Structured Lymphoma Progression Reports from Cross-sectional Imaging: A Comparative Analysis of Llama 3.3 and Llama 4.","authors":"Philipp Prucker, Keno K Bressem, Su Hwan Kim, Dominik Weller, Avan Kader, Felix J Dorfner, Sebastian Ziegelmayer, Markus M Graf, Tristan Lemke, Florian Gassert, Elif Can, Aymen Meddeb, Daniel Truhn, Martin Hadamitzky, Marcus R Makowski, Lisa C Adams, Felix Busch","doi":"10.1007/s10278-025-01618-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01618-z","url":null,"abstract":"<p><p>Efficient processing of radiology reports for monitoring disease progression is crucial in oncology. Although large language models (LLMs) show promise in extracting structured information from medical reports, privacy concerns limit their clinical implementation. This study evaluates the feasibility and accuracy of two of the most recent Llama models for generating structured lymphoma progression reports from cross-sectional imaging data in a privacy-preserving, real-world clinical setting. This single-center, retrospective study included adult lymphoma patients who underwent cross-sectional imaging and treatment between July 2023 and July 2024. We established a chain-of-thought prompting strategy to leverage the locally deployed Llama-3.3-70B-Instruct and Llama-4-Scout-17B-16E-Instruct models to generate lymphoma disease progression reports across three iterations. Two radiologists independently scored nodal and extranodal involvement, as well as Lugano staging and treatment response classifications. For each LLM and task, we calculated the F1 score, accuracy, recall, precision, and specificity per label, as well as the case-weighted average with 95% confidence intervals (CIs). Both LLMs correctly implemented the template structure for all 65 patients included in this study. Llama-4-Scout-17B-16E-Instruct demonstrated significantly greater accuracy in extracting nodal and extranodal involvement information (nodal: 0.99 [95% CI = 0.98-0.99] vs. 0.97 [95% CI = 0.95-0.96], p < 0.001; extranodal: 0.99 [95% CI = 0.99-1.00] vs. 0.99 [95% CI = 0.98-0.99], p = 0.013). This difference was more pronounced when predicting Lugano stage and treatment response (stage: 0.85 [95% CI = 0.79-0.89] vs. 0.60 [95% CI = 0.53-0.67], p < 0.001; treatment response: 0.88 [95% CI = 0.83-0.92] vs. 0.65 [95% CI = 0.58-0.71], p < 0.001). Neither model produced hallucinations of newly involved nodal or extranodal sites. The highest relative error rates were found when interpreting the level of disease after treatment. In conclusion, privacy-preserving LLMs can effectively extract clinical information from lymphoma imaging reports. While they excel at data extraction, they are limited in their ability to generate new clinical inferences from the extracted information. 
Our findings suggest their potential utility in streamlining documentation and highlight areas requiring optimization before clinical implementation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
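A chain-of-thought prompt for this kind of structured extraction could be sketched as below with Hugging Face transformers; the prompt wording, report fields, and generation settings are assumptions (not the study's template), and serving a 70B model locally requires substantial GPU memory and gated-repo access.

```python
# Illustrative chain-of-thought prompt for structured report generation with a
# locally served instruction-tuned Llama model. Fields and wording are assumed.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="meta-llama/Llama-3.3-70B-Instruct")  # gated repo

def build_prompt(report_text: str) -> str:
    return (
        "You are extracting lymphoma staging data from a radiology report.\n"
        "Think step by step: (1) list involved nodal sites, (2) list extranodal "
        "sites, (3) derive the Lugano stage, (4) classify treatment response.\n"
        "Return the result as JSON with keys: nodal, extranodal, stage, response.\n\n"
        f"Report:\n{report_text}\n"
    )

out = generator(build_prompt("CT chest/abdomen: ..."), max_new_tokens=512)
print(out[0]["generated_text"])
```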
Citations: 0
XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01592-6
Ella Eidlin, Assaf Hoogi, Hila Rozen, Mohammad Badarne, Nathan S Netanyahu
{"title":"XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.","authors":"Ella Eidlin, Assaf Hoogi, Hila Rozen, Mohammad Badarne, Nathan S Netanyahu","doi":"10.1007/s10278-025-01592-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01592-6","url":null,"abstract":"<p><p>Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical details can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to enhance vertebral structure visualization in X-ray images significantly. Our framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data-a persistent bottleneck in medical imaging, and (2) a dynamic self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets revealed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation conducted by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet facilitates immediate clinical deployment without requiring additional training overhead. This innovation represents a transformative advancement in emergency radiology, providing a scalable and time-efficient solution to enhance diagnostic accuracy in high-pressure clinical environments.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01614-3
Burak Tayyip Dede, İsa Çakar, Muhammed Oğuz, Bülent Alyanak, Fatih Bağcıer
{"title":"Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.","authors":"Burak Tayyip Dede, İsa Çakar, Muhammed Oğuz, Bülent Alyanak, Fatih Bağcıer","doi":"10.1007/s10278-025-01614-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01614-3","url":null,"abstract":"<p><p>The aim of this study was to evaluate the reliability of ChatGPT-4 measurement of acromiohumeral distance (AHD), a popular assessment in patients with shoulder pain. In this retrospective study, 71 registered shoulder magnetic resonance imaging (MRI) scans were included. AHD measurements were performed on a coronal oblique T1 sequence with a clear view of the acromion and humerus. Measurements were performed by an experienced radiologist twice at 3-day intervals and by ChatGPT-4 twice at 3-day intervals in different sessions. The first, second, and mean values of AHD measured by the physician were 7.6 ± 1.7, 7.5 ± 1.6, and 7.6 ± 1.7, respectively. The first, second, and mean values measured by ChatGPT-4 were 6.7 ± 0.8, 7.3 ± 1.1, and 7.1 ± 0.8, respectively. There was a significant difference between the physician and ChatGPT-4 between the first and mean measurements (p < 0.0001 and p = 0.009, respectively). However, there was no significant difference between the second measurements (p = 0.220). Intrarater reliability for the physician was excellent (ICC = 0.99); intrarater reliability for ChatGPT-4 was poor (ICC = 0.41). Interrater reliability was poor (ICC = 0.45). In conclusion, this study demonstrated that the reliability of ChatGPT-4 in AHD measurements is inferior to that of an experienced radiologist. This study may help improve the possible future contribution of large language models to medical science.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Two-Phase Deep Learning Approach for Architectural Distortion Detection in Mammograms.
Journal of imaging informatics in medicine Pub Date: 2025-07-25 DOI: 10.1007/s10278-025-01613-4
Sameh E Ibrahim, Mai S Mabrouk, Wael A Mohamed, Ahmed F Elnokrashy
{"title":"A Two-Phase Deep Learning Approach for Architectural Distortion Detection in Mammograms.","authors":"Sameh E Ibrahim, Mai S Mabrouk, Wael A Mohamed, Ahmed F Elnokrashy","doi":"10.1007/s10278-025-01613-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01613-4","url":null,"abstract":"<p><p>Breast cancer remains a global health challenge, ranking as a leading cause of mortality among women worldwide, emphasizing the need for early detection to improve treatment outcomes. Architectural distortion (AD), characterized by subtle disruptions in breast tissue patterns, is one of the earliest and most challenging signs of breast cancer to detect, often manifesting up to 2 years before other signs and providing a critical window for intervention. To address the limitations of traditional AD segmentation methods, this study introduces an advanced deep learning approach to automate and improve AD segmentation and classification on mammograms, significantly reducing radiologists' workload and enhancing diagnostic accuracy. The process began with implementing U-Net++ as a semantic segmentation model for pixel-level classification. Next, a standalone Mask R-CNN was applied, which incorporated instance segmentation for more precise detection of AD regions. Finally, the study introduced a two-phase pipeline that combines Mask R-CNN for segmentation with a ResNet-18 classification model to refine predictions and reduce false positives. Key enhancements, such as smooth L1 for bounding box regression loss and binary cross entropy with Dice loss for mask prediction, significantly improved segmentation metrics. The integrated approach achieved remarkable results with a segmentation accuracy of 0.852, a classification accuracy of 0.915, and a mean average precision (mAP) of 0.894. Furthermore, the sensitivity of our integrated approach was 92.4%. This enhances breast cancer screening and diagnosis, with the results highlighting its potential to improve patient outcomes through timely diagnosis and effective treatment planning.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144719334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AgentMRI: A Vision Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.
Journal of imaging informatics in medicine Pub Date: 2025-07-22 DOI: 10.1007/s10278-025-01617-0
Gulfam Ahmed Sajua, Marjan Akhib, Yuchou Chang
{"title":"AgentMRI: A Vison Language Model-Powered AI System for Self-regulating MRI Reconstruction with Multiple Degradations.","authors":"Gulfam Ahmed Sajua, Marjan Akhib, Yuchou Chang","doi":"10.1007/s10278-025-01617-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01617-0","url":null,"abstract":"<p><p>Artificial intelligence (AI)-driven autonomous agents are transforming multiple domains by integrating reasoning, decision-making, and task execution into a unified framework. In medical imaging, such agents have the potential to change workflows by reducing human intervention and optimizing image quality. In this paper, we introduce the AgentMRI. It is an AI-driven system that leverages vision language models (VLMs) for fully autonomous magnetic resonance imaging (MRI) reconstruction in the presence of multiple degradations. Unlike traditional MRI correction or reconstruction methods, AgentMRI does not rely on manual intervention for post-processing or does not rely on fixed correction models. Instead, it dynamically detects MRI corruption and then automatically selects the best correction model for image reconstruction. The framework uses a multi-query VLM strategy to ensure robust corruption detection through consensus-based decision-making and confidence-weighted inference. AgentMRI automatically chooses deep learning models that include MRI reconstruction, motion correction, and denoising models. We evaluated AgentMRI in both zero-shot and fine-tuned settings. Experimental results on a comprehensive brain MRI dataset demonstrate that AgentMRI achieves an average of 73.6% accuracy in zero-shot and 95.1% accuracy for fine-tuned settings. Experiments show that it accurately executes the reconstruction process without human intervention. AgentMRI eliminates manual intervention and introduces a scalable and multimodal AI framework for autonomous MRI processing. This work may build a significant step toward fully autonomous and intelligent MR image reconstruction systems.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144693061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Diagnostic Performance of Artificial Intelligence in Detecting and Distinguishing Pancreatic Ductal Adenocarcinoma via Computed Tomography: A Systematic Review and Meta-Analysis.
Journal of imaging informatics in medicine Pub Date: 2025-07-18 DOI: 10.1007/s10278-025-01607-2
Hamid Harandi, Mahdi Gouravani, Sogol Alikarami, Mohammad Shahrabi Farahani, Mohammad Ghavam, Soheil Mohammadi, Mohammad Amin Salehi, Sean Reynolds, Fatemeh Dehghani Firouzabadi, Fahimul Huda
{"title":"Diagnostic Performance of Artificial Intelligence in Detecting and Distinguishing Pancreatic Ductal Adenocarcinoma via Computed Tomography: A Systematic Review and Meta-Analysis.","authors":"Hamid Harandi, Mahdi Gouravani, Sogol Alikarami, Mohammad Shahrabi Farahani, Mohammad Ghavam, Soheil Mohammadi, Mohammad Amin Salehi, Sean Reynolds, Fatemeh Dehghani Firouzabadi, Fahimul Huda","doi":"10.1007/s10278-025-01607-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01607-2","url":null,"abstract":"<p><p>We conducted a systematic review and meta-analysis in diagnostic performance of studies that tried to use artificial intelligence (AI) algorithms in detecting pancreatic ductal adenocarcinoma (PDAC) and distinguishing them from other types of pancreatic lesions. We systematically searched for studies on pancreatic lesions and AI from January 2014 to May 2024. Data were extracted and a meta-analysis was performed using contingency tables and a random-effects model to calculate pooled sensitivity and specificity. Quality assessment was done using modified TRIPOD and PROBAST tools. We included 26 studies in this systematic review, with 22 studies chosen for meta-analysis. The evaluation of AI algorithms' performance in internal validation exhibited a pooled sensitivity of 93% (95% confidence interval [CI], 90 to 95) and specificity of 95% (95% CI, 92 to 97). Additionally, externally validated AI algorithms demonstrated a combined sensitivity of 89% (95% CI, 85 to 92) and specificity of 91% (95% CI, 85 to 95). Subgroup analysis indicated that diagnostic performance differed by comparator group, image contrast, segmentation technique, and algorithm type, with contrast-enhanced imaging and specific AI models (e.g., random forest for sensitivity and CNN for specificity) demonstrating superior accuracy. Although the potential biases should be further addressed, results of this systematic review and meta-analysis showed that AI models have the potential to be incorporated in clinical settings for the detection of smaller tumors and underpinning early signs of PDAC.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0