Aiste Matuleviciute-Stojanoska , Julia Sautier , Verena Bauer , Martin Nuessel , Volha Nizhnikava , Christian Stumpf , Thorsten Klink
{"title":"Coronary CT angiography: First comparison of model-based and hybrid iterative reconstruction with the reference standard invasive catheter angiography for CAD-RADS reporting","authors":"Aiste Matuleviciute-Stojanoska , Julia Sautier , Verena Bauer , Martin Nuessel , Volha Nizhnikava , Christian Stumpf , Thorsten Klink","doi":"10.1016/j.ejro.2024.100612","DOIUrl":"10.1016/j.ejro.2024.100612","url":null,"abstract":"<div><h3>Background</h3><div>The purpose of this study was to compare CCTA images generated using the HIR and IMR algorithms with the reference standard ICA, and to determine to what extent further improvements of IMR over HIR can be expected.</div></div><div><h3>Methods</h3><div>This retrospective study included 60 patients with low to intermediate CAD risk, who underwent coronary CTA (with HIR and IMR) and ICA. ICA was used as reference standard. Two independent and blinded readers evaluated 2226 segments, classifying stenosis with CAD-RADS (significant stenosis ≥3). Image quality was assessed with a 5-point scale, SNR in the ascending aorta, and FWHM of proximal LCA calibers. The impact of image noise, radiation dose, and BMI on diagnostic accuracy was evaluated using ROC curves and Fisher’s Exact Test. Quantitative plaque analysis was performed on 28 plaques.</div></div><div><h3>Results</h3><div>IMR showed higher image quality than HIR (IMR 4.4, HIR 3.97, p<0.001) with better SNR (21.4 vs. 13.28, p<0.001) and FWHM (4.44 vs. 4.55, p=0.003). IMR had better diagnostic accuracy (ROC AUC 0.967 vs. 0.948, p=0.16), performed better at higher radiation doses (p=0.02) and showed a larger minimum lumen area (p=0.022 and p=0.046).</div></div><div><h3>Conclusion</h3><div>IMR offers significantly superior image quality of CCTA, more precise measurements, and a stronger positive correlation with ICA. The overall diagnostic accuracy may be superior with IMR, although the differences were not statistically significant. However, in patients who are exposed to higher radiation doses during CCTA due to their constitution, IMR enables significantly better diagnostic accuracy than HIR, thus providing a specific benefit for obese patients.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100612"},"PeriodicalIF":1.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
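The SNR figure reported in the record above is conventionally the mean ROI attenuation (here, in the ascending aorta) divided by its standard deviation. A minimal sketch with synthetic HU samples — the values, ROI size, and noise levels are illustrative stand-ins, not study data:

```python
import numpy as np

def roi_snr(hu_values):
    # Mean attenuation over the ROI divided by its standard deviation
    # (the noise estimate); higher SNR means a cleaner reconstruction.
    hu = np.asarray(hu_values, dtype=float)
    return hu.mean() / hu.std(ddof=1)

# Synthetic HU samples for an ascending-aorta ROI under two
# reconstructions; the lower-noise one yields the higher SNR.
rng = np.random.default_rng(0)
hir_roi = rng.normal(400, 30, 500)   # noisier hybrid reconstruction
imr_roi = rng.normal(400, 19, 500)   # model-based, lower noise
print(round(roi_snr(hir_roi), 1), round(roi_snr(imr_roi), 1))
```

The noise standard deviations (30 vs. 19 HU) were chosen so the synthetic SNR gap roughly mirrors the reported 13.28 vs. 21.4.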
Hidemi Okuma , Amro Masarwah , Aleksandr Istomin , Aki Nykänen , Juhana Hakumäki , Ritva Vanninen , Mazen Sudah
{"title":"Increased background parenchymal enhancement on peri-menopausal breast magnetic resonance imaging","authors":"Hidemi Okuma , Amro Masarwah , Aleksandr Istomin , Aki Nykänen , Juhana Hakumäki , Ritva Vanninen , Mazen Sudah","doi":"10.1016/j.ejro.2024.100611","DOIUrl":"10.1016/j.ejro.2024.100611","url":null,"abstract":"<div><h3>Objectives</h3><div>To examine the background parenchymal enhancement (BPE) levels in peri-menopausal breast MRI compared with pre- and post-menopausal breast MRI.</div></div><div><h3>Methods</h3><div>This study included 562 patients (55.8±12.3 years) who underwent contrast-enhanced dynamic breast MRI between 2011 and 2015 for clinical indications. We evaluated the BPE level, amount of fibroglandular tissue (FGT), and social and clinical variables. The inter-reader agreement for the amount of FGT and the BPE level was evaluated using intraclass correlation coefficients. Associations between the BPE level and body mass index (BMI), ages of menarche and menopause, childbirth history, number of children, and the amount of FGT were determined using Spearman’s correlation coefficients or Mann-Whitney <em>U</em>-test. Pearson’s χ<sup>2</sup> test was used to assess the difference in the frequency of BPE categories among the age-groups.</div></div><div><h3>Results</h3><div>The inter-reader agreement was 0.864 for the amount of FGT and 0.840 for the BPE level, both indicating almost perfect agreement. The BPE level showed a weak positive correlation with the amount of FGT (Spearman’s ρ=0.271, <em>P</em><0.001). BPE was not significantly correlated with BMI, childbirth history, number of births, or ages of menarche or menopause. BPE was greater in the peri-menopausal age-group compared with the corresponding pre- and post-menopausal age-groups, both with benign and malignant lesions.</div></div><div><h3>Conclusions</h3><div>BPE was greater in the peri-menopausal stage than in the pre- and post-menopausal stages. Our results suggest that BPE showed a non-linear decrease with age and that the hormonal imbalance in the peri-menopausal period has a greater effect on the BPE level than was previously assumed.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100611"},"PeriodicalIF":1.8,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
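The BPE-versus-FGT association above was tested with Spearman's rank correlation; the same computation in outline uses `scipy.stats.spearmanr`. The cohort here is simulated (ordinal 1-4 categories with injected noise), purely to illustrate the statistic:

```python
import numpy as np
from scipy import stats

# Simulated cohort: BPE level (ordinal 1-4) rises weakly with the
# amount of fibroglandular tissue (FGT); values are illustrative only.
rng = np.random.default_rng(1)
fgt = rng.integers(1, 5, 200)                    # FGT category per patient
bpe = np.clip(fgt + rng.integers(-2, 3, 200), 1, 4)

rho, p = stats.spearmanr(fgt, bpe)
print(f"Spearman rho={rho:.3f}, p={p:.3g}")
```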
Tingting Mu , Xinde Zheng , Danjun Song , Jiejun Chen , Xuewang Yue , Wentao Wang , Shengxiang Rao
{"title":"Deep learning based on multiparametric MRI predicts early recurrence in hepatocellular carcinoma patients with solitary tumors ≤5 cm","authors":"Tingting Mu , Xinde Zheng , Danjun Song , Jiejun Chen , Xuewang Yue , Wentao Wang , Shengxiang Rao","doi":"10.1016/j.ejro.2024.100610","DOIUrl":"10.1016/j.ejro.2024.100610","url":null,"abstract":"<div><h3>Purpose</h3><div>To evaluate the effectiveness of a constructed deep learning model in predicting early recurrence after surgery in hepatocellular carcinoma (HCC) patients with solitary tumors ≤5 cm.</div></div><div><h3>Materials and methods</h3><div>Our study included a total of 331 HCC patients who underwent curative resection, with all patients having preoperative dynamic contrast-enhanced MRI (DCE-MRI). Patients who recurred within two years after surgery were defined as early recurrence. The enrolled patients were randomly divided into the training group and the testing group. A ResNet-based deep learning model with eight convolutional neural network branches was built to predict the early recurrence status of these patients. Patient characteristics and laboratory tests were further filtered by regression models and then integrated with deep learning models to improve the prediction performance.</div></div><div><h3>Results</h3><div>Among 331 HCC patients, 70 (21.1 %) experienced early recurrence. In multivariate Cox regression analysis, only tumor size (hazard ratio (HR)=1.394, 95 %CI:1.011–1.920, p value=0.043) and deep learning extracted image features (HR: 38440, 95 %CI:2321–636600, p value<0.001) were significant risk factors for early recurrence. In the training and testing cohort, the AUCs of the image-based deep learning prediction model were 0.839 and 0.833. By integrating tumor size with the image-based deep learning model to construct a combined model, we found that the AUCs of the combined model to assess early recurrence in the training and validation cohort were 0.846 and 0.842. We further developed a nomogram to visualize the preoperative combined model, and the prediction performance of the nomogram showed a good fit in the testing cohort.</div></div><div><h3>Conclusions</h3><div>The proposed deep learning-based prediction model using DCE-MRI is useful for assessing early recurrence in HCC patients with single tumors ≤5 cm.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100610"},"PeriodicalIF":1.8,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
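The combined-model idea above — fusing a deep-learning image score with tumor size in a single classifier and scoring it by AUC — can be sketched as follows. All data, coefficients, and labels below are synthetic inventions for illustration, not the study's cohort or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort: a deep-learning image score plus tumor size (cm)
# jointly drive a Bernoulli early-recurrence label.
rng = np.random.default_rng(2)
n = 331
deep_score = rng.normal(0, 1, n)      # stand-in for the DL image feature
tumor_size = rng.uniform(1, 5, n)
logit = 1.5 * deep_score + 0.4 * tumor_size - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Combined model: both predictors enter one logistic regression.
X = np.column_stack([deep_score, tumor_size])
clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"combined-model AUC: {auc:.3f}")
```

For brevity this scores the model on its own training data; the paper reports separate training/validation AUCs.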
Na Feng , Shanshan Zhao , Kai Wang , Peizhe Chen , Yunpeng Wang , Yuan Gao , Zhengping Wang , Yidan Lu , Chen Chen , Jincao Yao , Zhikai Lei , Dong Xu
{"title":"Deep learning model for diagnosis of thyroid nodules with size less than 1 cm: A multicenter, retrospective study","authors":"Na Feng , Shanshan Zhao , Kai Wang , Peizhe Chen , Yunpeng Wang , Yuan Gao , Zhengping Wang , Yidan Lu , Chen Chen , Jincao Yao , Zhikai Lei , Dong Xu","doi":"10.1016/j.ejro.2024.100609","DOIUrl":"10.1016/j.ejro.2024.100609","url":null,"abstract":"<div><h3>Objective</h3><div>To develop an ultrasound image-based dual-channel deep learning model to achieve accurate early diagnosis of thyroid nodules less than 1 cm.</div></div><div><h3>Methods</h3><div>A dual-channel deep learning model called thyroid nodule transformer network (TNT-Net) was proposed. The model has two input channels for transverse and longitudinal ultrasound images of thyroid nodules, respectively. A total of 9649 nodules from 8455 patients across five hospitals were retrospectively collected. The data were divided into a training set (8453 nodules, 7369 patients), an internal test set (565 nodules, 512 patients), and an external test set (631 nodules, 574 patients).</div></div><div><h3>Results</h3><div>TNT-Net achieved an area under the curve (AUC) of 0.953 (95 % confidence interval (CI): 0.934, 0.969) on the internal test set and 0.941 (95 % CI: 0.921, 0.957) on the external test set, significantly outperforming traditional deep convolutional neural network models and single-channel swin transformer model, whose AUCs ranged from 0.800 (95 % CI: 0.759, 0.837) to 0.856 (95 % CI: 0.819, 0.881). Furthermore, feature heatmap visualization showed that TNT-Net could extract richer and more energetic malignant nodule patterns.</div></div><div><h3>Conclusion</h3><div>The proposed TNT-Net model significantly improved the recognition capability for thyroid nodules with size less than 1 cm. This model has the potential to reduce overdiagnosis and overtreatment of such nodules, providing essential support for precise management of thyroid nodules while complementing fine-needle aspiration biopsy.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100609"},"PeriodicalIF":1.8,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142561079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
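The AUC-with-95%-CI figures quoted in this and the other records are typically obtained via the Mann-Whitney rank identity plus a percentile bootstrap. A self-contained sketch of that standard recipe (not the authors' evaluation code; the demo labels/scores are toy data):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    # AUC via the rank identity: P(random positive scores above a
    # random negative), counting ties as one half.
    y = np.asarray(y_true, dtype=bool)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=2000, seed=0):
    # Percentile bootstrap over patients; resamples that miss one of
    # the two classes are skipped.
    rng = np.random.default_rng(seed)
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():
            continue
        aucs.append(auc_mann_whitney(y[idx], s[idx]))
    return np.percentile(aucs, [2.5, 97.5])

y = [0] * 40 + [1] * 40
scores = list(range(80))          # perfectly separating toy score
lo, hi = bootstrap_auc_ci(y, scores, n_boot=200)
print(auc_mann_whitney(y, scores), lo, hi)
```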
Ruiting Wang , Lianting Zhong , Pingyi Zhu , Xianpan Pan , Lei Chen , Jianjun Zhou , Yuqin Ding
{"title":"MRI-based radiomics machine learning model to differentiate non-clear cell renal cell carcinoma from benign renal tumors","authors":"Ruiting Wang , Lianting Zhong , Pingyi Zhu , Xianpan Pan , Lei Chen , Jianjun Zhou , Yuqin Ding","doi":"10.1016/j.ejro.2024.100608","DOIUrl":"10.1016/j.ejro.2024.100608","url":null,"abstract":"<div><h3>Purpose</h3><div>We aim to develop an MRI-based radiomics model to improve the accuracy of differentiating non-ccRCC from benign renal tumors preoperatively.</div></div><div><h3>Methods</h3><div>The retrospective study included 195 patients with pathologically confirmed renal tumors (134 non-ccRCCs and 61 benign renal tumors) who underwent preoperative renal mass protocol MRI examinations. The patients were divided into a training set (n = 136) and test set (n = 59). Simple t-test and the Least Absolute Shrinkage and Selection Operator (LASSO) were used to select the most valuable features, and their rad-scores were calculated. The clinicoradiologic models, single-sequence radiomics models, multi-sequence radiomics models and combined models for differentiation were constructed with 2 classifiers (support vector machine (SVM), logistic regression (LR)) in the training set and used for differentiation in the test set. Ten-fold cross validation was applied to obtain the optimal hyperparameters of the models. The performances of the models were evaluated by the area under the receiver operating characteristic (ROC) curve (AUC). Delong’s test was performed to compare the performances of models.</div></div><div><h3>Results</h3><div>After univariate and multivariate logistic regression analysis, the independent risk factors to differentiate non-ccRCC from benign renal tumors were selected as follows: age, tumor region, hemorrhage, pseudocapsule and enhancement degree. Among the 14 machine learning classification models constructed, the combined model with LR had the highest performance in differentiating non-ccRCC from benign renal tumors. The AUC in the training set was 0.964, and the accuracy was 0.919. The AUC in the test set was 0.936, and the accuracy was 0.864.</div></div><div><h3>Conclusion</h3><div>The MRI-based radiomics machine learning is feasible to differentiate non-ccRCC from benign renal tumors, which could improve the accuracy of clinical diagnosis.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100608"},"PeriodicalIF":1.8,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
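The LASSO step described in the Methods — shrinking most radiomic coefficients to zero and scoring each patient on the survivors (the "rad-score") — can be sketched with an L1-penalized logistic regression. The feature matrix is synthetic; only the training-set size (136) loosely follows the record, and the penalty strength `C` is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, p = 136, 50                   # training-set size, candidate features
X = rng.normal(size=(n, p))
# Only three hypothetical features carry real signal about the class.
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, n)) > 0

Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5,
                           random_state=0).fit(Xs, y)
kept = np.flatnonzero(lasso.coef_[0])        # features surviving LASSO
rad_score = Xs @ lasso.coef_[0] + lasso.intercept_[0]   # linear rad-score
print(f"{kept.size} of {p} features kept")
```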
{"title":"Post-deployment performance of a deep learning algorithm for normal and abnormal chest X-ray classification: A study at visa screening centers in the United Arab Emirates","authors":"Amina Abdelqadir Mohamed AlJasmi , Hatem Ghonim , Mohyi Eldin Fahmy , Aswathy Nair , Shamie Kumar , Dennis Robert , Afrah Abdikarim Mohamed , Hany Abdou , Anumeha Srivastava , Bhargava Reddy","doi":"10.1016/j.ejro.2024.100606","DOIUrl":"10.1016/j.ejro.2024.100606","url":null,"abstract":"<div><h3>Background</h3><div>Chest radiographs (CXRs) are widely used to screen for infectious diseases like tuberculosis and COVID-19 among migrants. In such high-volume settings, manual CXR reporting is challenging, and integrating artificial intelligence (AI) algorithms into the workflow helps rule out normal findings in minutes, allowing radiologists to focus on abnormal cases.</div></div><div><h3>Methods</h3><div>In this post-deployment study, all the CXRs acquired during the visa screening process across 33 centers in United Arab Emirates from January 2021 to June 2022 (18 months) were included. The qXR v2.1 chest X-ray interpretation software was used to classify the scans into normal and abnormal, and its agreement against radiologist was evaluated. Additionally, a digital survey was conducted among 20 healthcare professionals with prior AI experience to understand real-world implementation challenges and impact.</div></div><div><h3>Results</h3><div>The analysis of 1,309,443 CXRs from 1,309,431 patients (median age: 35 years; IQR [29–42]; 1,030,071 males [78.7 %]) in this study revealed a Negative Predictive Value (NPV) of 99.92 % (95 % CI: 99.92, 99.93), Positive Predictive Value (PPV) of 5.06 % (95 % CI: 4.99, 5.13) and overall percent agreement of the AI with radiologists of 72.90 % (95 % CI: 72.82, 72.98). In the survey, the majority (88.2 %) of radiologists agreed that turnaround time was reduced after AI integration, while 82 % indicated that the AI improved their diagnostic accuracy.</div></div><div><h3>Discussion</h3><div>In contrast with existing studies, this research uses a substantially larger dataset. A high NPV and satisfactory agreement with human readers indicate that AI can reliably identify normal CXRs, making it suitable for routine applications.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100606"},"PeriodicalIF":1.8,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
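The NPV, PPV, and percent-agreement figures above all derive from a 2x2 confusion matrix. A minimal sketch with purely illustrative counts (not the study's actual matrix) shows how a rule-out operating point trades a very high NPV against a low PPV:

```python
def screening_metrics(tp, fp, tn, fn):
    # Positive/negative predictive value and raw percent agreement
    # from confusion-matrix counts.
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    agreement = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, agreement

# Illustrative counts for a rule-out screen: almost no abnormal scan
# is called normal (high NPV), at the cost of many false positives
# among the flagged scans (low PPV).
ppv, npv, agr = screening_metrics(tp=500, fp=9500, tn=89900, fn=100)
print(f"PPV={ppv:.2%} NPV={npv:.2%} agreement={agr:.2%}")
```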
Wenjiang Wang , Jiaojiao Li , Zimeng Wang , Yanjun Liu , Fei Yang , Shujun Cui
{"title":"Study on the classification of benign and malignant breast lesions using a multi-sequence breast MRI fusion radiomics and deep learning model","authors":"Wenjiang Wang , Jiaojiao Li , Zimeng Wang , Yanjun Liu , Fei Yang , Shujun Cui","doi":"10.1016/j.ejro.2024.100607","DOIUrl":"10.1016/j.ejro.2024.100607","url":null,"abstract":"<div><h3>Purpose</h3><div>To develop a multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning for the classification of benign and malignant breast lesions, to assist clinicians in better selecting treatment plans.</div></div><div><h3>Methods</h3><div>A total of 314 patients who underwent breast MRI examinations were included. They were randomly divided into training, validation, and test sets in a ratio of 7:1:2. Subsequently, features of T1-weighted images (T1WI), T2-weighted images (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI) were extracted using the convolutional neural network ResNet50 for fusion, and then combined with radiomic features from the three sequences. The following models were established: T1 model, T2 model, DCE model, DCE_T1_T2 model, and DCE_T1_T2_rad model. The performance of the models was evaluated by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. The differences between the DCE_T1_T2_rad model and the other four models were compared using the Delong test, with a <em>P</em>-value < 0.05 considered statistically significant.</div></div><div><h3>Results</h3><div>The five models established in this study performed well, with AUC values of 0.53 for the T1 model, 0.62 for the T2 model, 0.79 for the DCE model, 0.94 for the DCE_T1_T2 model, and 0.98 for the DCE_T1_T2_rad model. The DCE_T1_T2_rad model showed statistically significant differences (<em>P</em> < 0.05) compared to the other four models.</div></div><div><h3>Conclusion</h3><div>The use of a multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning can effectively improve the diagnostic performance of breast lesion classification.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100607"},"PeriodicalIF":1.8,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142534920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
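The fusion strategy described — concatenating per-sequence deep features with handcrafted radiomic features before classification — amounts to a single array concatenation. The shapes below are assumptions (2048-D globally pooled ResNet50 features per sequence, 100 radiomic features), not the paper's exact dimensions:

```python
import numpy as np

# Toy batch of 8 patients; each sequence contributes a pooled
# ResNet50 feature vector, plus one handcrafted radiomics vector.
rng = np.random.default_rng(4)
n = 8
f_t1 = rng.random((n, 2048))      # T1WI deep features
f_t2 = rng.random((n, 2048))      # T2WI deep features
f_dce = rng.random((n, 2048))     # DCE-MRI deep features
radiomics = rng.random((n, 100))  # handcrafted features, all 3 sequences

# Late fusion: one long feature vector per patient for the classifier.
fused = np.concatenate([f_t1, f_t2, f_dce, radiomics], axis=1)
print(fused.shape)
```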
Yi Xiang Tay , Marcus EH Ong , Shane J. Foley , Robert Chun Chen , Lai Peng Chan , Ronan Killeen , May San Mak , Jonathan P. McNulty , Kularatna Sanjeewa
{"title":"True cost estimation of common imaging procedures for cost-effectiveness analysis - insights from a Singapore hospital emergency department","authors":"Yi Xiang Tay , Marcus EH Ong , Shane J. Foley , Robert Chun Chen , Lai Peng Chan , Ronan Killeen , May San Mak , Jonathan P. McNulty , Kularatna Sanjeewa","doi":"10.1016/j.ejro.2024.100605","DOIUrl":"10.1016/j.ejro.2024.100605","url":null,"abstract":"<div><h3>Objectives</h3><div>There is a lack of clear and consistent cost reporting for cost-effectiveness analysis in radiology. Estimates are often obtained using costing derived from hospital charge records. This study aims to evaluate the accuracy of hospital charge records compared to a Singapore hospital's true diagnostic imaging costs.</div></div><div><h3>Methods</h3><div>A seven-step process involving a bottom-up micro-costing approach was devised and followed to calculate the cost of imaging using actual data from a clinical setting. We retrieved electronic data from a random sample of 96 emergency department patients who had CT brain, CT and X-ray cervical spine, and X-ray lumbar spine performed to calculate the parameters required for cost estimation. We adjusted imaging duration and number of performing personnel to account for variations.</div></div><div><h3>Results</h3><div>Our approach determined the average cost for the following imaging procedures: CT brain (€154.00), CT and X-ray cervical spine (€177.14 and €68.22), and X-ray lumbar spine (€79.85). We found that the true cost of both conventional radiography procedures was marginally higher than the subsidized patient charge, and all costs were slightly lower than the private patient charge except for X-ray lumbar spine (€73.49 vs. €79.85). We identified larger differences in cost for both CT procedures and smaller differences in cost for conventional radiography procedures, depending on the patient's private or subsidized status. For private status, the differences were: CT brain (Min: €194.20; Max: €264.40), CT cervical spine (Min: €219.54; Max: €399.05), X-ray cervical spine (Min: €5.27; Max: €61.94), and X-ray lumbar spine (Min: €6.36; Max: €108.04), while for subsidized status, the differences were: CT brain (Min: €7.56; Max: €62.64), CT cervical spine (Min: €47.02; Max: €132.49), X-ray cervical spine (Min: €15.88; Max: €103.44), and X-ray lumbar spine (Min: €13.66; Max: €149.44). Considering examination duration and the number of personnel engaged in a procedure, there were significant variations in the minimum, average, and maximum imaging costs.</div></div><div><h3>Conclusion</h3><div>There is a modest gap between hospital charges and actual costs, and we must therefore exercise caution and recognize the limitations of utilizing hospital charge records as absolute metrics for cost-effectiveness analysis<em>.</em> Our detailed approach can potentially enable more accurate imaging cost determination for future studies.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100605"},"PeriodicalIF":1.8,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
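The bottom-up micro-costing logic described above (staff time at wage rates, equipment depreciation spread over throughput, consumables, overhead) can be sketched as a small function. Every rate, duration, and count below is hypothetical, not a figure from the study:

```python
def exam_cost(minutes, staff_rates_per_hr, equipment_cost, annual_exams,
              consumables=0.0, overhead_frac=0.0):
    # Bottom-up micro-costing sketch: staff time x hourly wage rates,
    # plus equipment cost spread over yearly exam throughput,
    # consumables, and an optional overhead fraction.
    staff = sum(rate * minutes / 60 for rate in staff_rates_per_hr)
    depreciation = equipment_cost / annual_exams
    direct = staff + depreciation + consumables
    return direct * (1 + overhead_frac)

# Hypothetical CT brain: 15 min with a radiographer (EUR 40/hr) and a
# radiologist (EUR 120/hr), EUR 100k/yr scanner depreciation over
# 10,000 exams, EUR 5 consumables, 20 % overhead.
cost = exam_cost(15, [40, 120], 100_000, 10_000,
                 consumables=5, overhead_frac=0.2)
print(f"EUR {cost:.2f}")
```

Varying `minutes` and the length of `staff_rates_per_hr` reproduces the min/average/max spread the authors describe.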
Chuanjun Xu , Qinmei Xu , Li Liu , Mu Zhou , Zijian Xing , Zhen Zhou , Danyang Ren , Changsheng Zhou , Longjiang Zhang , Xiao Li , Xianghao Zhan , Olivier Gevaert , Guangming Lu
{"title":"A tri-light warning system for hospitalized COVID-19 patients: Credibility-based risk stratification for future pandemic preparedness","authors":"Chuanjun Xu , Qinmei Xu , Li Liu , Mu Zhou , Zijian Xing , Zhen Zhou , Danyang Ren , Changsheng Zhou , Longjiang Zhang , Xiao Li , Xianghao Zhan , Olivier Gevaert , Guangming Lu","doi":"10.1016/j.ejro.2024.100603","DOIUrl":"10.1016/j.ejro.2024.100603","url":null,"abstract":"<div><h3>Purpose</h3><div>The novel coronavirus pneumonia (COVID-19) has continually spread and mutated, requiring a patient risk stratification system to optimize medical resources and improve pandemic response. We aimed to develop a conformal prediction-based tri-light warning system for stratifying COVID-19 patients, applicable to both original and emerging variants.</div></div><div><h3>Methods</h3><div>We retrospectively collected data from 3646 patients across multiple centers in China. The dataset was divided into a training set (n = 1451), a validation set (n = 662), an external test set from Huoshenshan Field Hospital (n = 1263), and a specific test set for Delta and Omicron variants (n = 544). The tri-light warning system extracts radiomic features from CT (computed tomography) and integrates clinical records to classify patients into high-risk (red), uncertain-risk (yellow), and low-risk (green) categories. Models were built to predict ICU (intensive care unit) admissions (adverse cases in training/validation/Huoshenshan/variant test sets: n = 39/21/262/11) and were evaluated using AUROC (area under the receiver operating characteristic curve) and AUPRC (area under the precision-recall curve) metrics.</div></div><div><h3>Results</h3><div>The dataset included 1830 men (50.2 %) and 1816 women (49.8 %), with a median age of 53.7 years (IQR [interquartile range]: 42–65 years). The system demonstrated strong performance under data distribution shifts, with AUROC of 0.89 and AUPRC of 0.42 for original strains, and AUROC of 0.77–0.85 and AUPRC of 0.51–0.60 for variants.</div></div><div><h3>Conclusion</h3><div>The tri-light warning system can enhance pandemic responses by effectively stratifying COVID-19 patients under varying conditions and data shifts.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100603"},"PeriodicalIF":1.8,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
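The tri-light idea — conformal p-values turning a prediction into a red/yellow/green category, with "yellow" routed to a clinician — can be sketched as follows. The thresholding rule is a simplified textbook illustration of conformal credibility, not the paper's exact calibration:

```python
def tri_light(p_adverse, p_low_risk, alpha=0.05):
    # A class is "credible" when its conformal p-value exceeds alpha.
    # Exactly one credible class -> a confident call; zero or two
    # credible classes -> yellow, i.e. refer to a clinician.
    adverse_ok = p_adverse > alpha
    low_risk_ok = p_low_risk > alpha
    if adverse_ok and not low_risk_ok:
        return "red"       # high risk, e.g. likely ICU admission
    if low_risk_ok and not adverse_ok:
        return "green"     # low risk
    return "yellow"        # uncertain: both or neither class plausible

print(tri_light(0.40, 0.01))   # red
print(tri_light(0.02, 0.60))   # green
print(tri_light(0.30, 0.30))   # yellow
```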
Roberto Francischello , Salvatore Claudio Fanni , Martina Chiellini , Maria Febi , Giorgio Pomara , Claudio Bandini , Lorenzo Faggioni , Riccardo Lencioni , Emanuele Neri , Dania Cioni
{"title":"Radiomics-based machine learning role in differential diagnosis between small renal oncocytoma and clear cells carcinoma on contrast-enhanced CT: A pilot study","authors":"Roberto Francischello , Salvatore Claudio Fanni , Martina Chiellini , Maria Febi , Giorgio Pomara , Claudio Bandini , Lorenzo Faggioni , Riccardo Lencioni , Emanuele Neri , Dania Cioni","doi":"10.1016/j.ejro.2024.100604","DOIUrl":"10.1016/j.ejro.2024.100604","url":null,"abstract":"<div><h3>Purpose</h3><div>To investigate the potential role of radiomics-based machine learning in differentiating small renal oncocytoma (RO) from clear cells carcinoma (ccRCC) on contrast-enhanced CT (CECT).</div></div><div><h3>Material and methods</h3><div>Fifty-two patients with small renal masses who underwent CECT before surgery between January 2016 and December 2020 were retrospectively included in the study. At pathology examination 39 ccRCC and 13 RO were identified. All lesions were manually delineated on unenhanced (B), arterial (A), and venous (V) phase images. Radiomics features were extracted using three different fixed bin widths (bw) of 25 HU, 10 HU, and 5 HU from each phase (B, A, V), and with different combinations (B+A, B+V, B+A+V, A+V), leading to 21 different datasets. The Monte Carlo cross-validation technique was used to quantify the estimator performance. The final model built using the hyperparameter selected with Optuna was trained again on the training set and the final performance evaluation was made on the test set.</div></div><div><h3>Results</h3><div>The A+V bw 10 model achieved the greatest median (IQR) balanced accuracy among all models, 0.70 (0.64–0.75), while A bw 10 achieved the greatest among the monophasic models. The A bw 10 model achieved a median (IQR) sensitivity of 0.60 (0.40–0.60), specificity of 0.80 (0.73–0.87), AUC-ROC of 0.77 (0.66–0.84), accuracy of 0.75 (0.70–0.80), and a Phi Coefficient of 0.38 (0.20–0.47). None of the nine models with the lowest mean balanced accuracy values implemented features from A.</div></div><div><h3>Conclusion</h3><div>The A bw 10 model was identified as the most efficient mono-phasic model in differentiating small RO from ccRCC.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"13 ","pages":"Article 100604"},"PeriodicalIF":1.8,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
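The fixed bin widths (25/10/5 HU) in the Methods refer to gray-level discretization applied before texture-feature extraction, so texture matrices are comparable across lesions. A minimal sketch of that standard preprocessing step (HU values illustrative):

```python
import numpy as np

def discretize_fixed_bin_width(hu, bin_width=10.0):
    # Fixed-bin-width gray-level discretization: each voxel is mapped
    # to a 1-based bin index of width `bin_width` HU, anchored at the
    # lesion's minimum attenuation.
    hu = np.asarray(hu, dtype=float)
    return np.floor((hu - hu.min()) / bin_width).astype(int) + 1

# Five illustrative lesion voxels, discretized with a 10 HU bin width.
lesion = np.array([35.0, 41.0, 44.9, 55.0, 80.0])
print(discretize_fixed_bin_width(lesion, 10.0))
```

A narrower bin width (e.g. 5 HU) yields more gray levels and finer texture matrices; a wider one (25 HU) coarser, noise-robust ones — the trade-off the authors scan over.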