Journal of Imaging Informatics in Medicine: Latest Articles

A Deep-Learning-Enabled Electrocardiogram and Chest X-Ray for Detecting Pulmonary Arterial Hypertension.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-13 DOI: 10.1007/s10278-024-01225-4
Pang-Yen Liu, Shi-Chue Hsing, Dung-Jang Tsai, Chin Lin, Chin-Sheng Lin, Chih-Hung Wang, Wen-Hui Fang
Abstract: The diagnosis and treatment of pulmonary hypertension have changed dramatically over the past decade through redefined diagnostic criteria and advanced drug development. The application of artificial intelligence (AI) to detect elevated pulmonary arterial pressure (ePAP) was reported recently: AI has identified ePAP and its association with hospitalization for heart failure from chest X-rays (CXR), and an electrocardiogram (ECG)-based AI model has shown promise not only in detecting ePAP but also in predicting future cardiovascular mortality risk. We aimed to develop a deep-learning model (DLM) integrating ECG and CXR to detect ePAP (systolic pulmonary artery pressure > 50 mmHg on transthoracic echocardiography) and evaluate its performance. The DLM was trained on paired ECGs and CXRs and further validated in a community hospital; it was also evaluated for its ability to predict future occurrences of left ventricular dysfunction (LVD, ejection fraction < 35%) and cardiovascular mortality. In the internal dataset, the AUCs for detecting ePAP were 0.8261 with ECG (sensitivity 76.6%, specificity 74.5%), 0.8525 with CXR (sensitivity 82.8%, specificity 72.7%), and 0.8644 with the combination of both (sensitivity 78.6%, specificity 79.2%). In the external validation dataset, the AUCs were 0.8348 with ECG, 0.8605 with CXR, and 0.8734 with the combination. Using the combination, the negative predictive value (NPV) was 98% in the internal dataset and 98.1% in the external dataset. Patients with ePAP detected by the combined DLM had a higher risk of new-onset LVD (hazard ratio [HR] 4.51, 95% CI 3.54-5.76, internal dataset) and cardiovascular mortality (HR 6.08, 95% CI 4.66-7.95); similar results were seen in the external validation dataset. The DLM, integrating ECG and CXR, effectively detected ePAP with a strong NPV and forecasted future risks of LVD and cardiovascular mortality. It has the potential to expedite early identification of pulmonary hypertension, prompting evaluation by echocardiography and, when necessary, right heart catheterization (RHC), and potentially improving cardiovascular outcomes.
Pages: 747-756. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950589/pdf/
Citations: 0
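The headline numbers in this abstract (AUC, sensitivity, specificity, NPV) are standard binary-classification metrics. A minimal sketch of how such figures are computed with scikit-learn, using made-up labels and scores rather than the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical ground-truth ePAP labels (1 = ePAP) and model scores.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.9, 0.7, 0.6, 0.2, 0.8, 0.35])

auc = roc_auc_score(y_true, y_score)       # threshold-free discrimination
y_pred = (y_score >= 0.5).astype(int)      # a fixed operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
npv = tn / (tn + fn)           # probability a negative call is truly negative
```

The 0.5 threshold is an assumption; studies typically pick an operating point from the ROC curve.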
From Revisions to Insights: Converting Radiology Report Revisions into Actionable Educational Feedback Using Generative AI Models.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-19 DOI: 10.1007/s10278-024-01233-4
Shawn Lyo, Suyash Mohan, Alvand Hassankhani, Abass Noor, Farouk Dako, Tessa Cook
Abstract: Expert feedback on trainees' preliminary reports is crucial for radiologic training, but real-time feedback can be challenging due to non-contemporaneous, remote reading and increasing imaging volumes. Trainee report revisions contain valuable educational feedback, but synthesizing data from raw revisions is challenging. Generative AI models can potentially analyze these revisions and provide structured, actionable feedback. This study used the OpenAI GPT-4 Turbo API to analyze paired synthesized and open-source analogs of preliminary and finalized reports, identify discrepancies, categorize their severity and type, and suggest review topics. Expert radiologists reviewed the output by grading discrepancies, evaluating the accuracy of severity and category assignments, and rating the relevance of suggested review topics. The reproducibility of discrepancy detection and of maximal discrepancy severity was also examined. The model exhibited high sensitivity, detecting significantly more discrepancies than radiologists (W = 19.0, p < 0.001) with a strong positive correlation (r = 0.778, p < 0.001). Interrater reliability was fair for severity and type (Fleiss' kappa = 0.346 and 0.340, respectively; weighted kappa = 0.622 for severity). The LLM achieved a weighted F1 score of 0.66 for severity and 0.64 for type. Generated teaching points were considered relevant in ~85% of cases, and relevance correlated with maximal discrepancy severity (Spearman ρ = 0.76, p < 0.001). Reproducibility was moderate to good for the number of discrepancies (ICC(2,1) = 0.690) and substantial for maximal discrepancy severity (Fleiss' kappa = 0.718; weighted kappa = 0.94). Generative AI models can effectively identify discrepancies in report revisions and generate relevant educational feedback, offering promise for enhancing radiology training.
Pages: 1265-1279. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950553/pdf/
Citations: 0
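Weighted kappa and weighted F1, as reported above, can be computed with scikit-learn. A sketch with hypothetical severity grades from the model and a single expert (the paper's Fleiss' kappa involves more than two raters and would need e.g. statsmodels instead):

```python
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical severity grades (0-3) assigned by the model and one
# radiologist for ten discrepancies; illustrative values only.
model  = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
expert = [0, 1, 2, 2, 1, 3, 0, 3, 2, 0]

# Linearly weighted kappa penalizes near-misses less than distant ones.
kappa_w = cohen_kappa_score(model, expert, weights="linear")

# Weighted F1 averages per-class F1 scores by class support.
f1_w = f1_score(expert, model, average="weighted")
```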
The Usefulness of Low-Kiloelectron Volt Virtual Monochromatic Contrast-Enhanced Computed Tomography with Deep Learning Image Reconstruction Technique in Improving the Delineation of Pancreatic Ductal Adenocarcinoma.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-13 DOI: 10.1007/s10278-024-01214-7
Yasutaka Ichikawa, Yoshinori Kanii, Akio Yamazaki, Mai Kobayashi, Kensuke Domae, Motonori Nagata, Hajime Sakuma
Abstract: To evaluate the usefulness of low-keV multiphasic computed tomography (CT) with deep learning image reconstruction (DLIR) in improving the delineation of pancreatic ductal adenocarcinoma (PDAC) compared to conventional hybrid iterative reconstruction (HIR). Thirty-five patients with PDAC who underwent multiphasic CT were retrospectively evaluated. Raw data were reconstructed at two energy levels (40 keV and 70 keV) of virtual monochromatic imaging (VMI) using HIR (ASiR-V50%) and DLIR (TrueFidelity-H). The contrast-to-noise ratio (CNR_tumor) was calculated from CT values within regions of interest in the tumor and normal pancreas on pancreatic parenchymal phase images. Lesion conspicuity of PDAC in the pancreatic parenchymal phase on 40-keV HIR, 40-keV DLIR, and 70-keV DLIR images was qualitatively rated by two radiologists on a 5-point scale, using 70-keV HIR images as reference (1 = poor; 3 = equivalent to reference; 5 = excellent). CNR_tumor of 40-keV DLIR images (median 10.4, interquartile range (IQR) 7.8-14.9) was significantly higher than that of the other VMIs (40-keV HIR: median 6.2, IQR 4.4-8.5, P < 0.0001; 70-keV DLIR: median 6.3, IQR 5.1-9.9, P = 0.0002; 70-keV HIR: median 4.2, IQR 3.1-6.1, P < 0.0001), exceeding the 40-keV HIR and 70-keV HIR values by 72 ± 22% and 211 ± 340%, respectively. Lesion conspicuity scores on 40-keV DLIR images (observer 1, 4.5 ± 0.7; observer 2, 3.4 ± 0.5) were significantly higher than on 40-keV HIR (observer 1, 3.3 ± 0.9, P < 0.0001; observer 2, 3.1 ± 0.4, P = 0.013). DLIR is a promising reconstruction method for improving PDAC delineation on 40-keV VMI in the pancreatic parenchymal phase compared to conventional HIR.
Pages: 1236-1244. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950492/pdf/
Citations: 0
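The CNR_tumor figures above come from ROI statistics. A sketch under an assumed but common definition, the attenuation difference between tumor and normal pancreas divided by image noise, with hypothetical HU values:

```python
import numpy as np

# Hypothetical ROI samples (HU) from a pancreatic parenchymal phase image.
roi_tumor    = np.array([55.0, 60.0, 58.0, 52.0])     # hypodense PDAC
roi_pancreas = np.array([120.0, 115.0, 118.0, 122.0])  # normal parenchyma

# Noise is taken here as the SD of the normal-pancreas ROI; this is an
# assumption, as studies often use the SD of a separate noise ROI instead.
cnr = abs(roi_tumor.mean() - roi_pancreas.mean()) / roi_pancreas.std(ddof=1)
```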
Construction and Validation of a General Medical Image Dataset for Pretraining.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-15 DOI: 10.1007/s10278-024-01226-3
Rongguo Zhang, Chenhao Pei, Ji Shi, Shaokang Wang
Abstract: In deep learning for medical image analysis, models are often trained from scratch, and transfer learning from parameters pretrained on ImageNet is sometimes adopted; however, there is currently no universally accepted medical image dataset specifically designed for pretraining. The purpose of this study is to construct such a general dataset and validate its effectiveness on downstream medical imaging tasks, including classification and segmentation. We first built a medical image dataset by collecting several public medical image datasets (CPMID), then obtained pretrained models for transfer learning based on CPMID, using ResNets of various complexity and the Vision Transformer as backbone architectures. On classification and segmentation tasks across three other datasets, we compared the results of training from scratch, from ImageNet-pretrained parameters, and from CPMID-pretrained parameters. Accuracy, area under the receiver operating characteristic curve (ROC-AUC), and class activation maps served as classification metrics; Intersection over Union was the segmentation metric. Utilizing the parameters pretrained on CPMID, we achieved the best classification accuracy, weighted accuracy, and ROC-AUC values on the three validation datasets; notably, average classification accuracy outperformed the ImageNet-based results by 4.30%, 8.86%, and 3.85%, respectively. We also achieved the best balance of performance and efficiency in both classification and segmentation. The parameters pretrained on the proposed CPMID dataset are highly effective for common medical image analysis tasks such as classification and segmentation.
Pages: 1051-1061. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950592/pdf/
Citations: 0
Deep Learning-Based Prediction of Post-treatment Survival in Hepatocellular Carcinoma Patients Using Pre-treatment CT Images and Clinical Data.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-15 DOI: 10.1007/s10278-024-01227-2
Kyung Hwa Lee, Jungwook Lee, Gwang Hyeon Choi, Jihye Yun, Jiseon Kang, Jonggi Choi, Kang Mo Kim, Namkug Kim
Abstract: The objective of this study was to develop and evaluate a model for predicting post-treatment survival in hepatocellular carcinoma (HCC) patients using their CT images and clinical information, including various treatment information. We collected pre-treatment contrast-enhanced CT images and clinical information, including patient-related factors, initial treatment options, and survival status, from 692 patients. The cohort was divided into a training cohort (n = 507), a testing cohort (n = 146), and an external CT cohort (n = 39) of patients who underwent CT scans at other institutions. After training with fivefold cross-validation, the model was validated on both the testing cohort and the external CT cohort. Our cascaded model employed a 3D convolutional neural network (CNN) to extract features from CT images and derive final survival probabilities, obtained by concatenating previously predicted probabilities for each interval with the patient-related factors and treatment options and passing them through two consecutive fully connected layers; the number of outputs equals the number of time intervals, with each value representing the conditional survival probability for that interval. Performance was assessed using the concordance index (C-index), the mean cumulative/dynamic area under the receiver operating characteristic curve (mC/D AUC), and the mean Brier score (mBS), calculated every 3 months. An ablation study showed that using DenseNet-121 as the backbone network and a 6-month prediction interval optimized the model's performance. Integrating multimodal data yielded superior predictive capability compared to models using only CT images or clinical information (C-index 0.824 [95% CI 0.822-0.826], mC/D AUC 0.893 [95% CI 0.891-0.895], and mBS 0.121 [95% CI 0.120-0.123] for the internal test cohort; C-index 0.750 [95% CI 0.747-0.753], mC/D AUC 0.819 [95% CI 0.816-0.823], and mBS 0.159 [95% CI 0.158-0.161] for the external CT cohort). Our CNN-based discrete-time survival prediction model using CT images and clinical information demonstrated promising results in predicting post-treatment survival of patients with HCC.
Pages: 1212-1223. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950573/pdf/
Citations: 0
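A discrete-time survival model of the kind described outputs a conditional survival probability per interval; the marginal survival curve is the running product of those conditionals. A sketch with hypothetical per-interval outputs for one patient:

```python
import numpy as np

# Hypothetical conditional survival probabilities for four consecutive
# 6-month intervals, as such a model would output for one patient.
cond_surv = np.array([0.95, 0.90, 0.85, 0.80])

# Marginal survival through interval k is the product of the conditional
# probabilities of surviving every interval up to and including k.
surv_curve = np.cumprod(cond_surv)
```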
The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-26 DOI: 10.1007/s10278-024-01242-3
Noriko Kanemaru, Koichiro Yasaka, Nana Fujita, Jun Kanzawa, Osamu Abe
Abstract: Early detection of patients with impending bone metastasis is crucial for improving prognosis. This study investigated the feasibility of a fine-tuned, locally run large language model (LLM) for extracting patients with bone metastasis from unstructured Japanese radiology reports and compared its performance with manual annotation. This retrospective study included patients whose radiological reports contained "metastasis" (April 2018-January 2019, August-May 2022, and April-December 2023 for the training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of each report (used as input data) and classified them into group 0 (no bone metastasis), group 1 (progressive bone metastasis), or group 2 (stable or decreased bone metastasis). Group 0 was under-sampled in the training and test datasets due to class imbalance. The best-performing model on the validation set was then tested on the test dataset, with two additional radiologists (readers 1 and 2) classifying the test-dataset reports for comparison. On the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 demonstrated accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times of 105, 2312, and 3094 s, respectively. The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly below manual annotation by radiologists, in a markedly shorter time.
Pages: 865-872. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950591/pdf/
Citations: 0
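Per-group sensitivities such as 0.988/0.947/0.943 are class-wise recalls taken from a 3-class confusion matrix. A sketch using the study's group coding but made-up labels:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 0 = no bone metastasis, 1 = progressive,
# 2 = stable/decreased (group coding as in the study; values invented).
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 0, 1]
y_pred = [0, 0, 1, 1, 2, 2, 2, 2, 0, 1]

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
# Per-class sensitivity (recall): diagonal counts over row totals.
sensitivity = cm.diagonal() / cm.sum(axis=1)
```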
Improved Automated Quality Control of Skeletal Wrist Radiographs Using Deep Multitask Learning.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-08-26 DOI: 10.1007/s10278-024-01220-9
Guy Hembroff, Chad Klochko, Joseph Craig, Harikrishnan Changarnkothapeecherikkal, Richard Q Loi
Abstract: Radiographic quality control is an integral component of the radiology workflow. In this study, we developed a convolutional neural network model tailored for automated quality control, designed to detect and classify key attributes of wrist radiographs, including projection, laterality (based on the right/left marker), and the presence of hardware and/or casts; the model's primary objective was to ensure congruence of the results with the image requisition metadata in order to pass the quality assessment. Using a dataset of 6283 wrist radiographs from 2591 patients, our multitask deep-learning model based on the DenseNet-121 architecture achieved high accuracy in classifying projections (F1 score 97.23%), detecting casts (F1 score 97.70%), and identifying surgical hardware (F1 score 92.27%). Performance in laterality-marker detection was lower (F1 score 82.52%), particularly for partially visible or cut-off markers. This paper presents a comprehensive evaluation of the model's performance, highlighting its strengths, limitations, and the challenges encountered during development and implementation, and outlines planned future research directions aimed at refining and expanding the model's capabilities for improved clinical utility and patient care in radiographic quality control.
Pages: 838-849. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950583/pdf/
Citations: 0
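The congruence check such a QC model enables can be as simple as comparing image-derived attributes against the requisition metadata. A pure-Python sketch; the field names and values are hypothetical, not the study's schema:

```python
def passes_qc(predicted: dict, requisition: dict,
              fields=("projection", "laterality")) -> bool:
    """Pass QC only if every checked image-derived attribute
    matches the corresponding requisition metadata field."""
    return all(predicted.get(f) == requisition.get(f) for f in fields)

# Model output for one wrist radiograph vs. the order metadata.
pred = {"projection": "PA", "laterality": "L", "cast": True}
req  = {"projection": "PA", "laterality": "R"}

flagged = not passes_qc(pred, req)  # laterality mismatch -> flag for review
```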
Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images.
Journal of imaging informatics in medicine Pub Date : 2025-04-01 Epub Date: 2024-09-03 DOI: 10.1007/s10278-024-01192-w
Vincent-Béni Sèna Zossou, Freddy Houéhanou Rodrigue Gnangnon, Olivier Biaou, Florent de Vathaire, Rodrigue S Allodji, Eugène C Ezin
Abstract: Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing grayscale variations in liver tissue across computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases, and accurate differentiation between these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to enhance diagnostic accuracy in detecting HCC, metastasis, and healthy liver tissue. The system combines automatic segmentation and classification: the liver-lesion segmentation model is implemented with a residual attention U-Net, and a 9-layer CNN classifier implements the lesion classification model, taking as input the segmentation results combined with the original images. The dataset included 300 patients, 223 used to develop the segmentation model and 77 to test it; these 77 patients also served as inputs for the classification model (20 HCC cases, 27 with metastasis, and 30 healthy). In the test phase, the system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
Pages: 873-886. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950545/pdf/
Citations: 0
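The Dice score used to evaluate the segmentation model measures overlap between a predicted and a reference binary mask. A sketch with toy 4x4 masks standing in for liver-lesion segmentations:

```python
import numpy as np

# Toy binary masks: predicted vs. reference lesion segmentation.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
ref  = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

# Dice = 2 * |intersection| / (|pred| + |ref|), ranging from 0 to 1.
intersection = np.logical_and(pred, ref).sum()
dice = 2.0 * intersection / (pred.sum() + ref.sum())
```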
Vision Transformers in Medical Imaging: a Comprehensive Review of Advancements and Applications Across Multiple Diseases.
Journal of imaging informatics in medicine Pub Date : 2025-03-31 DOI: 10.1007/s10278-025-01481-y
Sanad Aburass, Osama Dorgham, Jamil Al Shaqsi, Maha Abu Rumman, Omar Al-Kadi
Abstract: The rapid advancement of artificial intelligence techniques, particularly deep learning, has transformed medical imaging. This paper presents a comprehensive review of recent research that leverages vision transformer (ViT) models for medical image classification across various disciplines, focusing on breast cancer, skin lesions, magnetic resonance imaging of brain tumors, lung diseases, retinal and eye analysis, COVID-19, heart diseases, colon cancer, brain disorders, diabetic retinopathy, skin diseases, kidney diseases, lymph node diseases, and bone analysis. Each work is critically analyzed and interpreted with respect to its performance, data preprocessing methodology, model architecture, transfer learning techniques, model interpretability, and identified challenges. Our findings suggest that ViTs show promising results in the medical imaging domain, often outperforming traditional convolutional neural networks (CNNs). A comprehensive overview is presented in figures and tables summarizing the key findings from each field. This paper provides critical insights into the current state of medical image classification using ViTs and highlights potential future directions for this rapidly evolving research area.
Citations: 0
Deep Learning-Assisted Diagnosis of Placenta Accreta Spectrum Using the DenseNet-121 Model: A Multicenter, Retrospective Study.
Journal of imaging informatics in medicine Pub Date : 2025-03-24 DOI: 10.1007/s10278-025-01475-w
Yurui Hu, Tianyu Liu, Shutong Pang, Xiao Ling, Zhanqiu Wang, Wenfei Li
Abstract: To explore the diagnostic value of deep learning (DL) imaging based on MRI in predicting placenta accreta spectrum (PAS) in high-risk pregnant women. A total of 263 patients with suspected placenta accreta from Institution I and Institution II were retrospectively analyzed and divided into training (n = 170) and external verification (n = 93) sets. Through image acquisition, feature extraction, and radiomic data processing, 15 radiomic features were used to train support vector machine (SVM), K-nearest neighbor (KNN), random forest (RF), light gradient boosting machine (LGBM), and DL models to predict PAS. Diagnostic performance was evaluated in the training set using the area under the curve (AUC) and accuracy, and further validated in the external verification set. Univariate and multivariate logistic regression revealed that a history of cesarean section, placental thickness, and placenta previa were independent clinical risk factors for predicting PAS. Among the machine learning (ML) models, SVM demonstrated the highest diagnostic power (AUC = 0.944, accuracy 0.876). The DL model performed significantly better than the other models, with an AUC of 0.956 (95% CI 0.931-0.981) in the training set and 0.863 (95% CI 0.816-0.910) in the external verification set, and outperformed the ML models in specificity. The MRI-based DL model may perform better in diagnosing PAS than traditional clinical models or ML radiomics models, as further confirmed in the external verification set.
Citations: 0
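The radiomics pipeline described above feeds 15 features into classifiers such as an SVM and scores them by AUC. A sketch of that pattern on synthetic feature vectors; the data, kernel, and hyperparameters here are illustrative assumptions, not the study's:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for 15-dimensional radiomic feature vectors with
# binary PAS labels; only the first two features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM with probability outputs, scored by held-out AUC.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```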