Journal of imaging informatics in medicine: Latest Articles

Deep Learning-Based Model for Non-invasive Hemoglobin Estimation via Body Parts Images: A Retrospective Analysis and a Prospective Emergency Department Study.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-19. DOI: 10.1007/s10278-024-01209-4
En-Ting Lin, Shao-Chi Lu, An-Sheng Liu, Chia-Hsin Ko, Chien-Hua Huang, Chu-Lin Tsai, Li-Chen Fu
{"title":"Deep Learning-Based Model for Non-invasive Hemoglobin Estimation via Body Parts Images: A Retrospective Analysis and a Prospective Emergency Department Study.","authors":"En-Ting Lin, Shao-Chi Lu, An-Sheng Liu, Chia-Hsin Ko, Chien-Hua Huang, Chu-Lin Tsai, Li-Chen Fu","doi":"10.1007/s10278-024-01209-4","DOIUrl":"10.1007/s10278-024-01209-4","url":null,"abstract":"<p><p>Anemia is a significant global health issue, affecting over a billion people worldwide, according to the World Health Organization. Generally, the gold standard for diagnosing anemia relies on laboratory measurements of hemoglobin. To meet the need in clinical practice, physicians often rely on visual examination of specific areas, such as conjunctiva, to assess pallor. However, this method is subjective and relies on the physician's experience. Therefore, we proposed a deep learning prediction model based on three input images from different body parts, namely, conjunctiva, palm, and fingernail. By incorporating additional body part labels and employing a fusion attention mechanism, the model learns and enhances the salient features of each body part during training, enabling it to produce reliable results. Additionally, we employ a dual loss function that allows the regression model to benefit from well-established classification methods, thereby achieving stable handling of minority samples. We used a retrospective data set (EYES-DEFY-ANEMIA) to develop this model called Body-Part-Anemia Network (BPANet). The BPANet showed excellent performance in detecting anemia, with accuracy of 0.849 and an F1-score of 0.828. Our multi-body-part model has been validated on a prospectively collected data set of 101 patients in National Taiwan University Hospital. The prediction accuracy as well as F1-score can achieve as high as 0.716 and 0.788, respectively. To sum up, we have developed and validated a novel non-invasive hemoglobin prediction model based on image input from multiple body parts, with the potential of real-time use at home and in clinical settings.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"775-792"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950610/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142006261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
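As a rough illustration of the dual-loss idea in the abstract above, the PyTorch sketch below pairs a hemoglobin regression term with an auxiliary anemia-classification term derived from the same label. The anemia threshold, loss weighting, and network outputs are hypothetical stand-ins; the paper's exact formulation is not reproduced here.

```python
# Illustrative sketch (not the authors' code): a dual-loss objective that
# lets a hemoglobin regressor borrow supervision from an anemia classifier.
import torch
import torch.nn as nn

ANEMIA_THRESHOLD_G_DL = 12.0  # hypothetical cutoff for the auxiliary class label

class DualLoss(nn.Module):
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha          # weight between regression and classification terms
        self.mse = nn.MSELoss()
        self.ce = nn.CrossEntropyLoss()

    def forward(self, hb_pred, class_logits, hb_true):
        # Regression term: predicted vs. measured hemoglobin (g/dL).
        reg = self.mse(hb_pred.squeeze(-1), hb_true)
        # Classification term: an anemia label derived from the same ground
        # truth, which can stabilize learning on minority (anemic) samples.
        cls_target = (hb_true < ANEMIA_THRESHOLD_G_DL).long()
        cls = self.ce(class_logits, cls_target)
        return self.alpha * reg + (1 - self.alpha) * cls

loss_fn = DualLoss()
hb_pred = torch.randn(8, 1) * 2 + 12   # dummy predictions
logits = torch.randn(8, 2)             # dummy anemia logits
hb_true = torch.rand(8) * 8 + 8        # dummy labels in the 8-16 g/dL range
print(loss_fn(hb_pred, logits, hb_true))
```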
Application of Wiener Filter Based on Improved BB Gradient Descent in Iris Image Restoration.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-09-04. DOI: 10.1007/s10278-024-01238-z
Chuandong Qin, Yiqing Zhang
{"title":"Application of Wiener Filter Based on Improved BB Gradient Descent in Iris Image Restoration.","authors":"Chuandong Qin, Yiqing Zhang","doi":"10.1007/s10278-024-01238-z","DOIUrl":"10.1007/s10278-024-01238-z","url":null,"abstract":"<p><p>Iris recognition, renowned for its exceptional precision, has been extensively utilized across diverse industries. However, the presence of noise and blur frequently compromises the quality of iris images, thereby adversely affecting recognition accuracy. In this research, we have refined the traditional Wiener filter image restoration technique by integrating it with a gradient descent strategy, specifically employing the Barzilai-Borwein (BB) step size selection. This innovative approach is designed to enhance both the precision and resilience of iris recognition systems. The BB gradient method is adept at optimizing the parameters of the Wiener filter by introducing simulated blurring and noise conditions to the iris images. Through this process, it is capable of restoring images that have been degraded by blur and noise, leading to a significant improvement in the clarity of the restored images and, consequently, a notable elevation in recognition performance. The results of our experiments have demonstrated that this advanced method surpasses conventional filtering techniques in terms of both subjective visual quality assessments and objective peak signal-to-noise ratio (PSNR) evaluations.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1165-1183"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950590/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
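The Barzilai-Borwein rule the abstract refers to picks the step size from the two most recent iterates: with s = x_k - x_(k-1) and y = g_k - g_(k-1), the BB1 step is alpha = (s.s)/(s.y). A minimal NumPy sketch on a generic least-squares problem follows; the Wiener-filter parameter tuning itself is not reproduced.

```python
# Minimal sketch of Barzilai-Borwein (BB) gradient descent on a
# least-squares objective, the step-size rule the abstract builds on.
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=200, alpha0=1e-3):
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x                 # change in iterate
        y = g_new - g                 # change in gradient
        denom = s @ y
        if abs(denom) > 1e-12:        # BB1 step: alpha = (s.s)/(s.y)
            alpha = (s @ s) / denom
        x, g = x_new, g_new
    return x

# Demo: recover x from A x = b by minimizing ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
grad = lambda x: 2 * A.T @ (A @ x - b)
x_hat = bb_gradient_descent(grad, np.zeros(10))
print(np.linalg.norm(x_hat - x_true))  # should be near zero
```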
2023 Industry Perceptions Survey on AI Adoption and Return on Investment.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-20. DOI: 10.1007/s10278-024-01147-1
Mitchell Goldburgh, Michael LaChance, Julia Komissarchik, Julia Patriarche, Joe Chapa, Oliver Chen, Priya Deshpande, Matthew Geeslin, Nina Kottler, Jennifer Sommer, Marcus Ayers, Vedrana Vujic
{"title":"2023 Industry Perceptions Survey on AI Adoption and Return on Investment.","authors":"Mitchell Goldburgh, Michael LaChance, Julia Komissarchik, Julia Patriarche, Joe Chapa, Oliver Chen, Priya Deshpande, Matthew Geeslin, Nina Kottler, Jennifer Sommer, Marcus Ayers, Vedrana Vujic","doi":"10.1007/s10278-024-01147-1","DOIUrl":"10.1007/s10278-024-01147-1","url":null,"abstract":"<p><p>This SIIM-sponsored 2023 report highlights an industry view on artificial intelligence adoption barriers and success related to diagnostic imaging, life sciences, and contrasts. In general, our 2023 survey indicates that there has been progress in adopting AI across multiple uses, and there continues to be an optimistic forecast for the impact on workflow and clinical outcomes. This report, as in prior years, should be seen as a snapshot of the use of AI in imaging. Compared to our 2021 survey, the 2023 respondents expressed wider AI adoption but felt this was behind the potential. Specifically, the adoption has increased as sources of return on investment with AI in radiology are better understood as documented by vendor/client use case studies. Generally, the discussions of AI solutions centered on workflow triage, visualization, detection, and characterization. Generative AI was also mentioned for improving productivity in reporting. As payor reimbursement remains elusive, the ROI discussions expanded to look at other factors, including increased hospital procedures and admissions, enhanced radiologist productivity for practices, and improved patient outcomes for integrated health networks. When looking at the longer-term horizon for AI adoption, respondents frequently mentioned that the opportunity for AI to achieve greater adoption with more complex AI and a more manageable/visible ROI is outside the USA. Respondents focused on the barriers to trust in AI and the FDA processes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"663-670"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950608/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Confidence-Aware Severity Assessment of Lung Disease from Chest X-Rays Using Deep Neural Network on a Multi-Reader Dataset.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-20. DOI: 10.1007/s10278-024-01151-5
Mohammadreza Zandehshahvar, Marly van Assen, Eun Kim, Yashar Kiarashi, Vikranth Keerthipati, Giovanni Tessarin, Emanuele Muscogiuri, Arthur E Stillman, Peter Filev, Amir H Davarpanah, Eugene A Berkowitz, Stefan Tigges, Scott J Lee, Brianna L Vey, Carlo De Cecco, Ali Adibi
{"title":"Confidence-Aware Severity Assessment of Lung Disease from Chest X-Rays Using Deep Neural Network on a Multi-Reader Dataset.","authors":"Mohammadreza Zandehshahvar, Marly van Assen, Eun Kim, Yashar Kiarashi, Vikranth Keerthipati, Giovanni Tessarin, Emanuele Muscogiuri, Arthur E Stillman, Peter Filev, Amir H Davarpanah, Eugene A Berkowitz, Stefan Tigges, Scott J Lee, Brianna L Vey, Carlo De Cecco, Ali Adibi","doi":"10.1007/s10278-024-01151-5","DOIUrl":"10.1007/s10278-024-01151-5","url":null,"abstract":"<p><p>In this study, we present a method based on Monte Carlo Dropout (MCD) as Bayesian neural network (BNN) approximation for confidence-aware severity classification of lung diseases in COVID-19 patients using chest X-rays (CXRs). Trained and tested on 1208 CXRs from Hospital 1 in the USA, the model categorizes severity into four levels (i.e., normal, mild, moderate, and severe) based on lung consolidation and opacity. Severity labels, determined by the median consensus of five radiologists, serve as the reference standard. The model's performance is internally validated against evaluations from an additional radiologist and two residents that were excluded from the median. The performance of the model is further evaluated on additional internal and external datasets comprising 2200 CXRs from the same hospital and 1300 CXRs from Hospital 2 in South Korea. The model achieves an average area under the curve (AUC) of 0.94 ± 0.01 across all classes in the primary dataset, surpassing human readers in each severity class and achieves a higher Kendall correlation coefficient (KCC) of 0.80 ± 0.03. The performance of the model is consistent across varied datasets, highlighting its generalization. A key aspect of the model is its predictive uncertainty (PU), which is inversely related to the level of agreement among radiologists, particularly in mild and moderate cases. The study concludes that the model outperforms human readers in severity assessment and maintains consistent accuracy across diverse datasets. Its ability to provide confidence measures in predictions is pivotal for potential clinical use, underscoring the BNN's role in enhancing diagnostic precision in lung disease analysis through CXR.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"793-803"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
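Monte Carlo Dropout itself is simple to sketch: keep dropout active at inference and aggregate several stochastic forward passes, using their spread as the predictive uncertainty. The PyTorch snippet below shows the mechanism on a stand-in classifier, not the authors' architecture.

```python
# Minimal sketch of Monte Carlo Dropout inference: dropout stays active
# at test time, and the spread across T stochastic passes yields a
# predictive-uncertainty estimate alongside the class prediction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(128, 4),                 # 4 severity classes
)

def mc_dropout_predict(model, x, n_samples=30):
    model.train()                      # keep dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])                             # (T, batch, classes)
    mean = probs.mean(dim=0)           # predictive distribution
    # Predictive uncertainty: entropy of the mean distribution.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(2, 1, 64, 64)          # two dummy image-sized inputs
mean, pu = mc_dropout_predict(model, x)
print(mean.argmax(dim=-1), pu)
```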
Dual-Tree Complex Wavelet Pooling and Attention-Based Modified U-Net Architecture for Automated Breast Thermogram Segmentation and Classification.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-09-03. DOI: 10.1007/s10278-024-01239-y
Lalit Garia, Hariharan Muthusamy
{"title":"Dual-Tree Complex Wavelet Pooling and Attention-Based Modified U-Net Architecture for Automated Breast Thermogram Segmentation and Classification.","authors":"Lalit Garia, Hariharan Muthusamy","doi":"10.1007/s10278-024-01239-y","DOIUrl":"10.1007/s10278-024-01239-y","url":null,"abstract":"<p><p>Thermography is a non-invasive and non-contact method for detecting cancer in its initial stages by examining the temperature variation between both breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) that uses dual-tree complex wavelet transform (DTCWT) and attention gate for breast thermal image segmentation for frontal and lateral view thermograms, aiming to outline ROI for potential tumor detection, was proposed. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. When compared to other state-of-the-art approaches for detecting breast cancer using a thermogram, the proposed methodology showed a higher accuracy of 99.90% for VGG16 deep features with NCA and Random Forest classifier. Simulation results expound that the proposed method can be used in breast cancer screening, facilitating early detection, and enhancing treatment outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"887-901"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950499/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
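The attention-gate component of such architectures can be sketched compactly; the snippet below shows a standard additive attention gate in PyTorch with illustrative channel sizes. The DTCWT pooling stage and the full DTCWAU-Net are not reproduced here.

```python
# Sketch of an additive attention gate of the kind used in attention
# U-Nets: decoder (gating) features re-weight encoder skip features so
# that irrelevant activations are suppressed before concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)   # gating signal
        self.w_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)   # skip features
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, g, x):
        # Upsample the gating signal to the skip connection's spatial size.
        g_up = F.interpolate(self.w_g(g), size=x.shape[2:],
                             mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(g_up + self.w_x(x))))
        return x * attn                # attention-weighted skip features

gate = AttentionGate(g_ch=128, x_ch=64, inter_ch=32)
g = torch.randn(1, 128, 16, 16)        # decoder (gating) features
x = torch.randn(1, 64, 32, 32)         # encoder skip features
print(gate(g, x).shape)                # torch.Size([1, 64, 32, 32])
```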
Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through Trans-IMSM Model.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-15. DOI: 10.1007/s10278-024-01222-7
Vasanthi Durairaj, Palani Uthirapathy
{"title":"Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through Trans-IMSM Model.","authors":"Vasanthi Durairaj, Palani Uthirapathy","doi":"10.1007/s10278-024-01222-7","DOIUrl":"10.1007/s10278-024-01222-7","url":null,"abstract":"<p><p>Multi-modal medical image (MI) fusion assists in generating collaboration images collecting complement features through the distinct images of several conditions. The images help physicians to diagnose disease accurately. Hence, this research proposes a novel multi-modal MI fusion modal named guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach to develop high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. This research utilizes the CT and MRI brain scan dataset to gather the input CT and MRI images. At first, the data preprocessing is carried out to preprocess these input images to improve the image quality and generalization ability for further analysis. Then, these preprocessed CT and MRI are decomposed into detail and base components utilizing the guided filter-based MI decomposition approach. This approach involves two phases: such as acquiring the image guidance and decomposing the images utilizing the guided filter. A canny operator is employed to acquire the image guidance comprising robust edge for CT and MRI images, and the guided filter is applied to decompose the guidance and preprocessed images. Then, by applying the Trans-IMSM model, fuse the detail components, while a weighting approach is used for the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network, and the final fused images for brain tumor detection are generated. Extensive tests are carried out to compute the Trans-IMSM method's efficacy. The evaluation results demonstrated the robustness and effectiveness, achieving an accuracy of 98.64% and an SSIM of 0.94.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"757-774"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950544/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
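The base/detail decomposition stage rests on the guided filter, which fits a local linear model of the output on a guidance image. A self-contained NumPy sketch of a self-guided filter follows; the Canny-guidance and transformer fusion stages are not reproduced, and the radius and eps values are illustrative.

```python
# Sketch of a guided-filter base/detail split (He et al.'s guided filter,
# here self-guided): the filtered output is the base layer, and the
# residual is the detail layer that a fusion stage would combine.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-2):
    size = 2 * radius + 1
    box = lambda img: uniform_filter(img, size=size)   # local box means
    mean_i, mean_p = box(guide), box(src)
    cov_ip = box(guide * src) - mean_i * mean_p
    var_i = box(guide * guide) - mean_i ** 2
    a = cov_ip / (var_i + eps)         # local linear coefficients
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)     # edge-preserving smoothed output

img = np.random.rand(128, 128)        # stand-in CT/MRI slice
base = guided_filter(img, img)        # base layer: large-scale structure
detail = img - base                   # detail layer: edges and texture
print(base.shape, detail.std())
```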
A Deep-Learning-Enabled Electrocardiogram and Chest X-Ray for Detecting Pulmonary Arterial Hypertension.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-13. DOI: 10.1007/s10278-024-01225-4
Pang-Yen Liu, Shi-Chue Hsing, Dung-Jang Tsai, Chin Lin, Chin-Sheng Lin, Chih-Hung Wang, Wen-Hui Fang
{"title":"A Deep-Learning-Enabled Electrocardiogram and Chest X-Ray for Detecting Pulmonary Arterial Hypertension.","authors":"Pang-Yen Liu, Shi-Chue Hsing, Dung-Jang Tsai, Chin Lin, Chin-Sheng Lin, Chih-Hung Wang, Wen-Hui Fang","doi":"10.1007/s10278-024-01225-4","DOIUrl":"10.1007/s10278-024-01225-4","url":null,"abstract":"<p><p>The diagnosis and treatment of pulmonary hypertension have changed dramatically through the re-defined diagnostic criteria and advanced drug development in the past decade. The application of Artificial Intelligence for the detection of elevated pulmonary arterial pressure (ePAP) was reported recently. Artificial Intelligence (AI) has demonstrated the capability to identify ePAP and its association with hospitalization due to heart failure when analyzing chest X-rays (CXR). An AI model based on electrocardiograms (ECG) has shown promise in not only detecting ePAP but also in predicting future risks related to cardiovascular mortality. We aimed to develop an AI model integrating ECG and CXR to detect ePAP and evaluate their performance. We developed a deep-learning model (DLM) using paired ECG and CXR to detect ePAP (systolic pulmonary artery pressure > 50 mmHg in transthoracic echocardiography). This model was further validated in a community hospital. Additionally, our DLM was evaluated for its ability to predict future occurrences of left ventricular dysfunction (LVD, ejection fraction < 35%) and cardiovascular mortality. The AUCs for detecting ePAP were as follows: 0.8261 with ECG (sensitivity 76.6%, specificity 74.5%), 0.8525 with CXR (sensitivity 82.8%, specificity 72.7%), and 0.8644 with a combination of both (sensitivity 78.6%, specificity 79.2%) in the internal dataset. In the external validation dataset, the AUCs for ePAP detection were 0.8348 with ECG, 0.8605 with CXR, and 0.8734 with the combination. Furthermore, using the combination of ECGs and CXR, the negative predictive value (NPV) was 98% in the internal dataset and 98.1% in the external dataset. Patients with ePAP detected by the DLM using combination had a higher risk of new-onset LVD with a hazard ratio (HR) of 4.51 (95% CI: 3.54-5.76) in the internal dataset and cardiovascular mortality with a HR of 6.08 (95% CI: 4.66-7.95). Similar results were seen in the external validation dataset. The DLM, integrating ECG and CXR, effectively detected ePAP with a strong NPV and forecasted future risks of developing LVD and cardiovascular mortality. This model has the potential to expedite the early identification of pulmonary hypertension in patients, prompting further evaluation through echocardiography and, when necessary, right heart catheterization (RHC), potentially resulting in enhanced cardiovascular outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"747-756"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141972525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
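The abstract does not describe the network internals, so the following PyTorch sketch is only a hypothetical late-fusion layout: one 1-D branch for the ECG, one 2-D branch for the CXR, and a shared head producing an ePAP probability.

```python
# Hypothetical two-branch late-fusion model (not the authors' DLM):
# each modality is encoded separately, then pooled features are
# concatenated and mapped to a single ePAP probability.
import torch
import torch.nn as nn

class EcgCxrFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.ecg = nn.Sequential(       # 1-D branch for a 12-lead ECG
            nn.Conv1d(12, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.cxr = nn.Sequential(       # 2-D branch for a chest X-ray
            nn.Conv2d(1, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 1)    # late fusion -> ePAP probability

    def forward(self, ecg, cxr):
        z = torch.cat([self.ecg(ecg), self.cxr(cxr)], dim=1)
        return torch.sigmoid(self.head(z))

model = EcgCxrFusion()
prob = model(torch.randn(2, 12, 5000), torch.randn(2, 1, 224, 224))
print(prob.shape)                       # torch.Size([2, 1])
```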
From Revisions to Insights: Converting Radiology Report Revisions into Actionable Educational Feedback Using Generative AI Models.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-19. DOI: 10.1007/s10278-024-01233-4
Shawn Lyo, Suyash Mohan, Alvand Hassankhani, Abass Noor, Farouk Dako, Tessa Cook
{"title":"From Revisions to Insights: Converting Radiology Report Revisions into Actionable Educational Feedback Using Generative AI Models.","authors":"Shawn Lyo, Suyash Mohan, Alvand Hassankhani, Abass Noor, Farouk Dako, Tessa Cook","doi":"10.1007/s10278-024-01233-4","DOIUrl":"10.1007/s10278-024-01233-4","url":null,"abstract":"<p><p>Expert feedback on trainees' preliminary reports is crucial for radiologic training, but real-time feedback can be challenging due to non-contemporaneous, remote reading and increasing imaging volumes. Trainee report revisions contain valuable educational feedback, but synthesizing data from raw revisions is challenging. Generative AI models can potentially analyze these revisions and provide structured, actionable feedback. This study used the OpenAI GPT-4 Turbo API to analyze paired synthesized and open-source analogs of preliminary and finalized reports, identify discrepancies, categorize their severity and type, and suggest review topics. Expert radiologists reviewed the output by grading discrepancies, evaluating the severity and category accuracy, and suggested review topic relevance. The reproducibility of discrepancy detection and maximal discrepancy severity was also examined. The model exhibited high sensitivity, detecting significantly more discrepancies than radiologists (W = 19.0, p < 0.001) with a strong positive correlation (r = 0.778, p < 0.001). Interrater reliability for severity and type were fair (Fleiss' kappa = 0.346 and 0.340, respectively; weighted kappa = 0.622 for severity). The LLM achieved a weighted F1 score of 0.66 for severity and 0.64 for type. Generated teaching points were considered relevant in ~ 85% of cases, and relevance correlated with the maximal discrepancy severity (Spearman ρ = 0.76, p < 0.001). The reproducibility was moderate to good (ICC (2,1) = 0.690) for the number of discrepancies and substantial for maximal discrepancy severity (Fleiss' kappa = 0.718; weighted kappa = 0.94). Generative AI models can effectively identify discrepancies in report revisions and generate relevant educational feedback, offering promise for enhancing radiology training.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1265-1279"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950553/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142006262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
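A hedged sketch of the kind of call the study describes, using the OpenAI Python SDK's chat-completions interface: the prompt wording, output schema, and severity scale below are invented for illustration and are not the authors' protocol.

```python
# Illustrative GPT-4 Turbo call comparing a preliminary and a finalized
# radiology report. The prompt and JSON schema are hypothetical.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

PROMPT = """Compare the trainee's preliminary radiology report with the
attending's finalized report. List each discrepancy, rate its severity
(1-4), categorize it, and suggest one review topic per discrepancy.
Return JSON with keys: discrepancies, max_severity, review_topics."""

def analyze_revision(preliminary: str, finalized: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,                  # favor reproducible output
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user",
             "content": f"PRELIMINARY:\n{preliminary}\n\nFINAL:\n{finalized}"},
        ],
    )
    return resp.choices[0].message.content

print(analyze_revision("No acute findings.",
                       "Subtle right lower lobe opacity, possibly pneumonia."))
```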
The Usefulness of Low-Kiloelectron Volt Virtual Monochromatic Contrast-Enhanced Computed Tomography with Deep Learning Image Reconstruction Technique in Improving the Delineation of Pancreatic Ductal Adenocarcinoma.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-13. DOI: 10.1007/s10278-024-01214-7
Yasutaka Ichikawa, Yoshinori Kanii, Akio Yamazaki, Mai Kobayashi, Kensuke Domae, Motonori Nagata, Hajime Sakuma
{"title":"The Usefulness of Low-Kiloelectron Volt Virtual Monochromatic Contrast-Enhanced Computed Tomography with Deep Learning Image Reconstruction Technique in Improving the Delineation of Pancreatic Ductal Adenocarcinoma.","authors":"Yasutaka Ichikawa, Yoshinori Kanii, Akio Yamazaki, Mai Kobayashi, Kensuke Domae, Motonori Nagata, Hajime Sakuma","doi":"10.1007/s10278-024-01214-7","DOIUrl":"10.1007/s10278-024-01214-7","url":null,"abstract":"<p><p>To evaluate the usefulness of low-keV multiphasic computed tomography (CT) with deep learning image reconstruction (DLIR) in improving the delineation of pancreatic ductal adenocarcinoma (PDAC) compared to conventional hybrid iterative reconstruction (HIR). Thirty-five patients with PDAC who underwent multiphasic CT were retrospectively evaluated. Raw data were reconstructed with two energy levels (40 keV and 70 keV) of virtual monochromatic imaging (VMI) using HIR (ASiR-V50%) and DLIR (TrueFidelity-H). Contrast-to-noise ratio (CNR<sub>tumor</sub>) was calculated from the CT values within regions of interest in tumor and normal pancreas in the pancreatic parenchymal phase images. Lesion conspicuity of PDAC in pancreatic parenchymal phase on 40-keV HIR, 40-keV DLIR, and 70-keV DLIR images was qualitatively rated on a 5-point scale, using 70-keV HIR images as reference (score 1 = poor; score 3 = equivalent to reference; score 5 = excellent) by two radiologists. CNR<sub>tumor</sub> of 40-keV DLIR images (median 10.4, interquartile range (IQR) 7.8-14.9) was significantly higher than that of the other VMIs (40 keV HIR, median 6.2, IQR 4.4-8.5, P < 0.0001; 70-keV DLIR, median 6.3, IQR 5.1-9.9, P = 0.0002; 70-keV HIR, median 4.2, IQR 3.1-6.1, P < 0.0001). CNR<sub>tumor</sub> of 40-keV DLIR images were significantly better than those of the 40-keV HIR and 70-keV HIR images by 72 ± 22% and 211 ± 340%, respectively. Lesion conspicuity scores on 40-keV DLIR images (observer 1, 4.5 ± 0.7; observer 2, 3.4 ± 0.5) were significantly higher than on 40-keV HIR (observer 1, 3.3 ± 0.9, P < 0.0001; observer 2, 3.1 ± 0.4, P = 0.013). DLIR is a promising reconstruction method to improve PDAC delineation in 40-keV VMI at the pancreatic parenchymal phase compared to conventional HIR.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1236-1244"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950492/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141972526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
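CNR_tumor here contrasts tumor and normal-pancreas attenuation; the listing does not spell out the noise term, so the sketch below uses one common convention (parenchymal standard deviation) with synthetic HU values purely for illustration.

```python
# Worked sketch of a tumor contrast-to-noise ratio of the general form
# CNR = |mean HU difference| / noise SD. The noise convention is an
# assumption, not necessarily the one used in the study.
import numpy as np

def cnr(roi_tumor: np.ndarray, roi_pancreas: np.ndarray) -> float:
    # Mean attenuation difference between normal pancreas and tumor,
    # normalized by the parenchymal standard deviation.
    return abs(roi_pancreas.mean() - roi_tumor.mean()) / roi_pancreas.std(ddof=1)

rng = np.random.default_rng(1)
tumor = rng.normal(40, 12, size=500)       # hypodense PDAC voxels (HU), synthetic
pancreas = rng.normal(110, 14, size=500)   # enhancing parenchyma voxels (HU), synthetic
print(f"CNR_tumor = {cnr(tumor, pancreas):.1f}")
```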
Construction and Validation of a General Medical Image Dataset for Pretraining.
Journal of imaging informatics in medicine. Pub Date: 2025-04-01. Epub Date: 2024-08-15. DOI: 10.1007/s10278-024-01226-3
Rongguo Zhang, Chenhao Pei, Ji Shi, Shaokang Wang
{"title":"Construction and Validation of a General Medical Image Dataset for Pretraining.","authors":"Rongguo Zhang, Chenhao Pei, Ji Shi, Shaokang Wang","doi":"10.1007/s10278-024-01226-3","DOIUrl":"10.1007/s10278-024-01226-3","url":null,"abstract":"<p><p>In the field of deep learning for medical image analysis, training models from scratch are often used and sometimes, transfer learning from pretrained parameters on ImageNet models is also adopted. However, there is no universally accepted medical image dataset specifically designed for pretraining models currently. The purpose of this study is to construct such a general dataset and validate its effectiveness on downstream medical imaging tasks, including classification and segmentation. In this work, we first build a medical image dataset by collecting several public medical image datasets (CPMID). And then, some pretrained models used for transfer learning are obtained based on CPMID. Various-complexity Resnet and the Vision Transformer network are used as the backbone architectures. In the tasks of classification and segmentation on three other datasets, we compared the experimental results of training from scratch, from the pretrained parameters on ImageNet, and from the pretrained parameters on CPMID. Accuracy, the area under the receiver operating characteristic curve, and class activation map are used as metrics for classification performance. Intersection over Union as the metric is for segmentation evaluation. Utilizing the pretrained parameters on the constructed dataset CPMID, we achieved the best classification accuracy, weighted accuracy, and ROC-AUC values on three validation datasets. Notably, the average classification accuracy outperformed ImageNet-based results by 4.30%, 8.86%, and 3.85% respectively. Furthermore, we achieved the optimal balanced outcome of performance and efficiency in both classification and segmentation tasks. The pretrained parameters on the proposed dataset CPMID are very effective for common tasks in medical image analysis such as classification and segmentation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1051-1061"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950592/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
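The downstream use of such a dataset is ordinary transfer learning: initialize a backbone from the pretrained checkpoint, swap the head, and fine-tune. A minimal torchvision sketch follows; the checkpoint filename is a placeholder, since CPMID weights are not distributed with this listing.

```python
# Minimal transfer-learning sketch: start a classifier from pretrained
# weights rather than from scratch. "cpmid_resnet50.pth" is hypothetical.
import os
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)          # same backbone family as the study
ckpt = "cpmid_resnet50.pth"             # placeholder checkpoint path
if os.path.exists(ckpt):
    state = torch.load(ckpt, map_location="cpu")
    model.load_state_dict(state, strict=False)  # tolerate head mismatches

# Re-head for a hypothetical 3-class downstream task.
model.fc = nn.Linear(model.fc.in_features, 3)

# Optionally freeze the pretrained encoder and train only the new head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```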