Journal of imaging informatics in medicine: Latest Articles

Deep Learning-Based Prediction of Post-treatment Survival in Hepatocellular Carcinoma Patients Using Pre-treatment CT Images and Clinical Data.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-15 DOI: 10.1007/s10278-024-01227-2
Kyung Hwa Lee, Jungwook Lee, Gwang Hyeon Choi, Jihye Yun, Jiseon Kang, Jonggi Choi, Kang Mo Kim, Namkug Kim
The objective of this study was to develop and evaluate a model for predicting post-treatment survival in hepatocellular carcinoma (HCC) patients using their CT images and clinical information, including various treatment information. We collected pre-treatment contrast-enhanced CT images and clinical information, including patient-related factors, initial treatment options, and survival status, from 692 patients. The patient cohort was divided into a training cohort (n = 507), a testing cohort (n = 146), and an external CT cohort (n = 39) of patients who underwent CT scans at other institutions. After model training using fivefold cross-validation, the model was validated on both the testing cohort and the external CT cohort. Our cascaded model employed a 3D convolutional neural network (CNN) to extract features from CT images and derive final survival probabilities. These probabilities were obtained by concatenating previously predicted probabilities for each interval with the patient-related factors and treatment options, then passing them through two consecutive fully connected layers, yielding one output per time interval representing the conditional survival probability for that interval. Performance was assessed using the concordance index (C-index), the mean cumulative/dynamic area under the receiver operating characteristic curve (mC/D AUC), and the mean Brier score (mBS), calculated every 3 months. Through an ablation study, we found that using DenseNet-121 as the backbone network and setting the prediction interval to 6 months optimized the model's performance. The integration of multimodal data yielded superior predictive capability compared to models using only CT images or clinical information (C-index 0.824 [95% CI 0.822-0.826], mC/D AUC 0.893 [95% CI 0.891-0.895], and mBS 0.121 [95% CI 0.120-0.123] for the internal test cohort; C-index 0.750 [95% CI 0.747-0.753], mC/D AUC 0.819 [95% CI 0.816-0.823], and mBS 0.159 [95% CI 0.158-0.161] for the external CT cohort). Our CNN-based discrete-time survival prediction model combining CT images and clinical information demonstrated promising results in predicting post-treatment survival of patients with HCC.
Pages 1212-1223. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950573/pdf/
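The discrete-time formulation described above (one conditional survival probability per interval) can be turned into a survival curve by a running product. The sketch below is illustrative only, not the authors' code; the interval count and probabilities are invented.

```python
import numpy as np

def survival_curve(cond_probs):
    """Convert per-interval conditional survival probabilities
    P[survive interval k | survived up to k] into a cumulative
    survival curve via a running product."""
    return np.cumprod(cond_probs)

# Hypothetical model outputs for four 6-month intervals
cond = np.array([0.95, 0.90, 0.85, 0.80])
curve = survival_curve(cond)
# curve[k] is the probability of surviving through interval k
```

The curve is necessarily non-increasing, which is what makes the per-interval (discrete-time) parameterization convenient for neural survival models.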
Citations: 0
The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-26 DOI: 10.1007/s10278-024-01242-3
Noriko Kanemaru, Koichiro Yasaka, Nana Fujita, Jun Kanzawa, Osamu Abe
Early detection of patients with impending bone metastasis is crucial for prognosis improvement. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) in extracting patients with bone metastasis from unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients with "metastasis" in their radiology reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of the radiology reports (used as input data) and classified them into group 0 (no bone metastasis), group 1 (progressive bone metastasis), and group 2 (stable or decreased bone metastasis). The data for group 0 were under-sampled in the training and test datasets due to group imbalance. The best-performing model from the validation set was subsequently tested on the test dataset. Two additional radiologists (readers 1 and 2) classified the radiology reports within the test dataset for comparison. In the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 demonstrated accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times of 105, 2312, and 3094 s, respectively. The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly lower than manual annotation by radiologists, in a noticeably shorter time.
Pages 865-872. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950591/pdf/
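Per-group sensitivity figures like those reported can be derived from a confusion matrix. A minimal sketch, with invented counts rather than the study's data:

```python
import numpy as np

def per_class_sensitivity(cm):
    """Sensitivity (recall) per class from a confusion matrix where
    cm[i, j] = number of class-i reports predicted as class j."""
    return np.diag(cm) / cm.sum(axis=1)

# Hypothetical counts for groups 0/1/2
cm = np.array([[90,  5,  5],
               [ 2, 95,  3],
               [ 4,  6, 90]])
sens = per_class_sensitivity(cm)  # one recall value per group
```

Row sums give the true number of reports in each group, so the diagonal over the row sum is exactly the per-group sensitivity.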
Citations: 0
Improved Automated Quality Control of Skeletal Wrist Radiographs Using Deep Multitask Learning.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-26 DOI: 10.1007/s10278-024-01220-9
Guy Hembroff, Chad Klochko, Joseph Craig, Harikrishnan Changarnkothapeecherikkal, Richard Q Loi
Radiographic quality control is an integral component of the radiology workflow. In this study, we developed a convolutional neural network model tailored for automated quality control, specifically designed to detect and classify key attributes of wrist radiographs including projection, laterality (based on the right/left marker), and the presence of hardware and/or casts. The model's primary objective was to ensure the congruence of results with image requisition metadata to pass the quality assessment. Using a dataset of 6283 wrist radiographs from 2591 patients, our multitask deep learning model based on the DenseNet-121 architecture achieved high accuracy in classifying projections (F1 score 97.23%), detecting casts (F1 score 97.70%), and identifying surgical hardware (F1 score 92.27%). The model's performance in laterality marker detection was lower (F1 score 82.52%), particularly for partially visible or cut-off markers. This paper presents a comprehensive evaluation of the model's performance, highlighting its strengths, limitations, and the challenges encountered during development and implementation. Furthermore, we outline planned future research directions aimed at refining and expanding the model's capabilities for improved clinical utility and patient care in radiographic quality control.
Pages 838-849. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950583/pdf/
Citations: 0
Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-09-03 DOI: 10.1007/s10278-024-01192-w
Vincent-Béni Sèna Zossou, Freddy Houéhanou Rodrigue Gnangnon, Olivier Biaou, Florent de Vathaire, Rodrigue S Allodji, Eugène C Ezin
Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing the grayscale variations in liver tissue across different computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases. Accurately differentiating these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to improve diagnostic accuracy in detecting HCC, metastasis, and healthy liver tissue. The system combines automatic segmentation and classification: the liver-lesion segmentation model is implemented with a residual attention U-Net, and a 9-layer CNN classifier implements the lesion classification model, taking as input the segmentation results combined with the original images. The dataset included 300 patients, 223 used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, comprising 20 HCC cases, 27 with metastasis, and 30 healthy. In the test phase, the system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
Pages 873-886. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950545/pdf/
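The Dice score used to evaluate the segmentation model is a standard overlap measure, 2|A∩B| / (|A|+|B|). A minimal sketch (the binary masks below are invented for illustration):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)  # 2*2 / (3+3), about 0.667
```

The small `eps` guards against division by zero when both masks are empty.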
Citations: 0
Vision Transformers in Medical Imaging: a Comprehensive Review of Advancements and Applications Across Multiple Diseases.
Journal of imaging informatics in medicine Pub Date : 2025-03-31 DOI: 10.1007/s10278-025-01481-y
Sanad Aburass, Osama Dorgham, Jamil Al Shaqsi, Maha Abu Rumman, Omar Al-Kadi
The rapid advancement of artificial intelligence techniques, particularly deep learning, has transformed medical imaging. This paper presents a comprehensive review of recent research that leverages vision transformer (ViT) models for medical image classification across various disciplines. The medical fields of focus include breast cancer, skin lesions, magnetic resonance imaging brain tumors, lung diseases, retinal and eye analysis, COVID-19, heart diseases, colon cancer, brain disorders, diabetic retinopathy, skin diseases, kidney diseases, lymph node diseases, and bone analysis. Each work is critically analyzed and interpreted with respect to its performance, data preprocessing methodology, model architecture, transfer learning techniques, model interpretability, and identified challenges. Our findings suggest that ViTs show promising results in the medical imaging domain, often outperforming traditional convolutional neural networks (CNNs). A comprehensive overview is presented in the form of figures and tables summarizing the key findings from each field. This paper provides critical insights into the current state of medical image classification using ViTs and highlights potential future directions for this rapidly evolving research area.
Citations: 0
Deep Learning-Assisted Diagnosis of Placenta Accreta Spectrum Using the DenseNet-121 Model: A Multicenter, Retrospective Study.
Journal of imaging informatics in medicine Pub Date : 2025-03-24 DOI: 10.1007/s10278-025-01475-w
Yurui Hu, Tianyu Liu, Shutong Pang, Xiao Ling, Zhanqiu Wang, Wenfei Li
To explore the diagnostic value of deep learning (DL) imaging based on MRI in predicting placenta accreta spectrum (PAS) in high-risk pregnant women, a total of 263 patients with suspected placenta accreta from Institution I and Institution II were retrospectively analyzed and divided into training (n = 170) and external verification (n = 93) sets. Through image acquisition, feature extraction, and radiomic data processing, 15 radiomic features were used to train support vector machine (SVM), K-nearest neighbor (KNN), random forest (RF), light gradient boosting machine (LGBM), and DL models to predict PAS. The diagnostic performance of the models was evaluated in the training set using the area under the curve (AUC) and accuracy, and further validated in the external verification set. Univariate and multivariate logistic regression analysis revealed that a history of cesarean section, placental thickness, and placenta previa were independent clinical risk factors for predicting PAS. Among the machine learning (ML) models, SVM demonstrated the highest diagnostic power (AUC = 0.944) with an accuracy of 0.876. The diagnostic efficiency of the DL model was significantly better than that of the other models, with an AUC of 0.956 (95% CI 0.931-0.981) in the training set and 0.863 (95% CI 0.816-0.910) in the external verification set. In terms of specificity, the DL model outperformed the ML models. The MRI-based DL model may thus perform better in diagnosing PAS than traditional clinical models or ML radiomics models, as further confirmed in the external verification set.
Citations: 0
Federated Learning Framework for Brain Tumor Detection Using MRI Images in Non-IID Data Distributions.
Journal of imaging informatics in medicine Pub Date : 2025-03-24 DOI: 10.1007/s10278-025-01484-9
M D Zahin Muntaqim, Tangin Amir Smrity
Brain tumor detection from medical images, especially magnetic resonance imaging (MRI) scans, is a critical task in early diagnosis and treatment planning. Traditional machine learning approaches often rely on centralized data, raising concerns about data privacy and security and the difficulty of obtaining large annotated datasets. Federated learning (FL) has emerged as a promising solution for training models across decentralized devices while maintaining data privacy. However, challenges remain in dealing with non-IID data (data that are not independent and identically distributed across clients), which is common in real-world scenarios. In this research, we used a client-server federated learning framework for brain tumor detection from MRI images, with VGG19 as the backbone model. To improve clinical relevance and model interpretability, we included explainability techniques, particularly Grad-CAM. We trained the model across four clients with a non-IID data distribution to simulate real-world conditions. For performance evaluation, we used a centralized test dataset consisting of 20% of the original data, evaluated collectively after the federated learning rounds were complete. Using a separate test dataset ensures that all models are evaluated on the same data, making comparisons fair; since the test dataset is not part of the FL training process, it does not violate the privacy-preserving nature of FL. The experimental results demonstrate that the VGG19 model achieves high test accuracies of 97.18% (FedAvg), 98.24% (FedProx), and 98.45% (SCAFFOLD), outperforming other state-of-the-art models and showcasing the effectiveness of federated learning in handling distributed, non-IID data. Our findings highlight the potential of federated learning to address privacy concerns in medical image analysis while maintaining high performance even in non-IID settings, providing a promising direction for future research in privacy-preserving AI for healthcare applications.
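FedAvg, one of the aggregation strategies compared above, averages client model parameters weighted by each client's local dataset size. The sketch below is a simplified illustration with invented parameter vectors, not the study's implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors into a global model,
    weighting each client by its local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coef = sizes / sizes.sum()
    stacked = np.stack(client_weights)           # (n_clients, n_params)
    return (coef[:, None] * stacked).sum(axis=0)

# Four hypothetical clients with non-IID data of different sizes
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]),
           np.array([5.0, 6.0]), np.array([7.0, 8.0])]
sizes = [100, 200, 300, 400]
global_w = fedavg(clients, sizes)
```

FedProx and SCAFFOLD modify the local client objective and updates rather than this server-side averaging step, which is why they can help under non-IID distributions.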
Citations: 0
Highly Accurate Occupational Pneumoconiosis Staging via Dark Channel Prior-Inspired Lesion Area Enhancement.
Journal of imaging informatics in medicine Pub Date : 2025-03-24 DOI: 10.1007/s10278-025-01472-z
Weiling Li, Tianci Zhou, Ani Dong, Liang Xiong, Qianhao Luo, Ling Mou, Xin Liu
Occupational pneumoconiosis (OP) staging is the core of OP diagnosis. It is essentially an image classification task that assesses a patient's lung condition from a chest X-ray. Artificial intelligence-assisted OP staging commonly relies on representation learning and classification of the chest X-ray film, where convolutional neural networks (CNNs) have proven very efficient. However, unlike common image classification tasks, OP staging relies heavily on the profusion level of opacities, i.e., the reflection of OP lesions on the X-ray film. The OP lesions overlap with other tissues in the chest, making the opacities hard to represent with a standard CNN and thus leading to inaccurate staging results. Inspired by the similarity between OP lesions and haze (both read like dust suspended in a space), this study proposes a dark channel prior-inspired lesion area enhancement (DCP-LAE)-based OP staging method with high accuracy. Its ideas are twofold: (a) enhancing the OP lesion areas with an X-ray film restoration method inspired by dark channel prior-based de-hazing, and (b) fusing multiple features via a bi-branch network structure to achieve high staging accuracy. Experimental results on real OP cases collected in hospitals demonstrate that the DCP-LAE-based OP staging model achieves an accuracy of 83.8%, surpassing existing state-of-the-art models.
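The dark channel prior referenced above takes, for each pixel, the minimum intensity over the color channels and a local patch; in de-hazing, bright dark-channel values indicate haze. This is a generic sketch of that computation, with an invented patch size and a random toy image, not the paper's enhancement pipeline:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an H x W x C image: per-pixel channel minimum,
    followed by a patch-wise minimum filter (edges clamped)."""
    chan_min = img.min(axis=2)
    h, w = chan_min.shape
    r = patch // 2
    out = np.empty_like(chan_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = chan_min[max(0, i - r):i + r + 1,
                                 max(0, j - r):j + r + 1].min()
    return out

img = np.random.rand(8, 8, 3)
dc = dark_channel(img)
```

On haze-free natural images the dark channel is close to zero almost everywhere, which is the statistical prior the de-hazing method (and, by analogy, the lesion enhancement here) exploits.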
Citations: 0
A Study of Why We Need to Reassess Full Reference Image Quality Assessment with Medical Images.
Journal of imaging informatics in medicine Pub Date : 2025-03-24 DOI: 10.1007/s10278-025-01462-1
Anna Breger, Ander Biguri, Malena Sabaté Landman, Ian Selby, Nicole Amberg, Elisabeth Brunner, Janek Gröhl, Sepideh Hatamikia, Clemens Karner, Lipeng Ning, Sören Dittmer, Michael Roberts, Carola-Bibiane Schönlieb
Image quality assessment (IQA) is indispensable in clinical practice to ensure high standards, as well as in the development stage of machine learning algorithms that operate on medical images. The popular full-reference (FR) IQA measures PSNR and SSIM are known and tested to work well in many natural imaging tasks, but discrepancies in medical scenarios have been reported in the literature, highlighting the gap between development and actual clinical application. Such inconsistencies are not surprising, as medical images have very different properties than natural images, and PSNR and SSIM have neither been targeted at nor properly tested for medical images. This may cause unforeseen problems in clinical applications due to wrong judgement of novel methods. This paper provides a structured and comprehensive overview of examples where PSNR and SSIM prove unsuitable for assessing novel algorithms on different kinds of medical images, including real-world MRI, CT, OCT, X-ray, digital pathology, and photoacoustic imaging data. Improvement is therefore urgently needed, particularly in this era of AI, to increase reliability and explainability in machine learning for medical imaging and beyond. Lastly, we provide ideas for future research and suggest guidelines for the use of FR-IQA measures applied to medical images.
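Of the two FR-IQA measures under scrutiny, PSNR is the simpler: a log-scaled mean squared error relative to the peak intensity range. A minimal sketch (the reference and test images are invented); note that the result depends directly on `data_range`, one reason the measure can mislead on medical data whose intensity ranges vary by modality:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    test image of the same shape and intensity range."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1           # constant error, so MSE = 0.01
value = psnr(ref, noisy)    # 10 * log10(1 / 0.01) = 20 dB
```

Because PSNR is a pure pixel-wise error summary, two distortions with identical MSE (e.g., smooth bias versus structured artifact) score the same, which is one of the failure modes the paper examines.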
Citations: 0
Evaluation of a Retrieval-Augmented Generation-Powered Chatbot for Pre-CT Informed Consent: a Prospective Comparative Study.
Journal of imaging informatics in medicine Pub Date : 2025-03-21 DOI: 10.1007/s10278-025-01483-w
Felix Busch, Lukas Kaibel, Hai Nguyen, Tristan Lemke, Sebastian Ziegelmayer, Markus Graf, Alexander W Marka, Lukas Endrös, Philipp Prucker, Daniel Spitzl, Markus Mergen, Marcus R Makowski, Keno K Bressem, Sebastian Petzoldt, Lisa C Adams, Tim Landgraf
This study investigates the feasibility, usability, and effectiveness of a retrieval-augmented generation (RAG)-powered Patient Information Assistant (PIA) chatbot for pre-CT information counseling, compared with the standard physician consultation and informed consent process. This prospective comparative study included 86 patients scheduled for CT imaging between November and December 2024. Patients were randomly assigned either to the PIA group (n = 43), who received pre-CT information via the PIA chat app, or to the control group (n = 43), with a standard doctor-led consultation. Patient satisfaction, information clarity and comprehension, and concerns were assessed using six ten-point Likert-scale questions after information counseling with the PIA or the doctor's consultation. Additionally, consultation duration was measured, PIA-group patients were asked about their preference for pre-CT consultation, and two radiologists rated each PIA chat in five categories. Both groups reported similarly high ratings for information clarity (PIA: 8.64 ± 1.69; control: 8.86 ± 1.28; p = 0.82) and overall comprehension (PIA: 8.81 ± 1.40; control: 8.93 ± 1.61; p = 0.35). However, the doctor consultation group showed greater effectiveness in alleviating patient concerns (8.30 ± 2.63 versus 6.46 ± 3.29; p = 0.003). The PIA group demonstrated significantly shorter subsequent consultation times (median 120 s [interquartile range (IQR) 100-140] versus 195 s [IQR 170-220]; p = 0.04). Both radiologists rated the overall quality, scientific and clinical evidence, clinical usefulness and relevance, consistency, and up-to-dateness of the PIA highly. The RAG-powered PIA effectively provided pre-CT information while significantly reducing physician consultation time. While both methods achieved comparable patient satisfaction and comprehension, physicians were more effective at addressing worries or concerns regarding the examination.
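The retrieval step of a RAG pipeline like the PIA can be illustrated with cosine similarity over document embeddings: the top-scoring snippets are prepended to the LLM prompt so answers stay grounded in vetted material. The vectors below are invented toy embeddings; a real system would use a learned embedding model and a vector store:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query
    by cosine similarity (retrieval stage of a RAG pipeline)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k]

# Toy 2-D embeddings for three information snippets
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
idx = top_k(np.array([1.0, 0.05]), docs)  # nearest snippets first
```

Grounding generation in retrieved snippets is what distinguishes a RAG chatbot from a plain LLM and underpins the radiologists' ratings of evidence quality and up-to-dateness above.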
Citations: 0