Journal of Digital Imaging — Latest Articles

Deep Learning–based Diagnosis of Pulmonary Tuberculosis on Chest X-ray in the Emergency Department: A Retrospective Study
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00952-4
Chih-Hung Wang, Weishan Chang, Meng-Rui Lee, Joyce Tay, Cheng-Yi Wu, Meng-Che Wu, Holger R. Roth, Dong Yang, Can Zhao, Weichung Wang, Chien-Hua Huang
{"title":"Deep Learning–based Diagnosis of Pulmonary Tuberculosis on Chest X-ray in the Emergency Department: A Retrospective Study","authors":"Chih-Hung Wang, Weishan Chang, Meng-Rui Lee, Joyce Tay, Cheng-Yi Wu, Meng-Che Wu, Holger R. Roth, Dong Yang, Can Zhao, Weichung Wang, Chien-Hua Huang","doi":"10.1007/s10278-023-00952-4","DOIUrl":"https://doi.org/10.1007/s10278-023-00952-4","url":null,"abstract":"<p>Prompt and correct detection of pulmonary tuberculosis (PTB) is critical in preventing its spread. We aimed to develop a deep learning–based algorithm for detecting PTB on chest X-ray (CXRs) in the emergency department. This retrospective study included 3498 CXRs acquired from the National Taiwan University Hospital (NTUH). The images were chronologically split into a training dataset, NTUH-1519 (images acquired during the years 2015 to 2019; <i>n</i> = 2144), and a testing dataset, NTUH-20 (images acquired during the year 2020; <i>n</i> = 1354). Public databases, including the NIH ChestX-ray14 dataset (model training; 112,120 images), Montgomery County (model testing; 138 images), and Shenzhen (model testing; 662 images), were also used in model development. EfficientNetV2 was the basic architecture of the algorithm. Images from ChestX-ray14 were employed for pseudo-labelling to perform semi-supervised learning. The algorithm demonstrated excellent performance in detecting PTB (area under the receiver operating characteristic curve [AUC] 0.878, 95% confidence interval [CI] 0.854–0.900) in NTUH-20. The algorithm showed significantly better performance in posterior-anterior (PA) CXR (AUC 0.940, 95% CI 0.912–0.965, <i>p-value</i> &lt; 0.001) compared with anterior–posterior (AUC 0.782, 95% CI 0.644–0.897) or portable anterior–posterior (AUC 0.869, 95% CI 0.814–0.918) CXR. The algorithm accurately detected cases of bacteriologically confirmed PTB (AUC 0.854, 95% CI 0.823–0.883). Finally, the algorithm tested favourably in Montgomery County (AUC 0.838, 95% CI 0.765–0.904) and Shenzhen (AUC 0.806, 95% CI 0.771–0.839). A deep learning–based algorithm could detect PTB on CXR with excellent performance, which may help shorten the interval between detection and airborne isolation for patients with PTB.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139423453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reliable Delineation of Clinical Target Volumes for Cervical Cancer Radiotherapy on CT/MR Dual-Modality Images
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00951-5
Ying Sun, Yuening Wang, Kexin Gan, Yuxin Wang, Ying Chen, Yun Ge, Jie Yuan, Hanzi Xu
{"title":"Reliable Delineation of Clinical Target Volumes for Cervical Cancer Radiotherapy on CT/MR Dual-Modality Images","authors":"Ying Sun, Yuening Wang, Kexin Gan, Yuxin Wang, Ying Chen, Yun Ge, Jie Yuan, Hanzi Xu","doi":"10.1007/s10278-023-00951-5","DOIUrl":"https://doi.org/10.1007/s10278-023-00951-5","url":null,"abstract":"<p>Accurate delineation of the clinical target volume (CTV) is a crucial prerequisite for safe and effective radiotherapy characterized. This study addresses the integration of magnetic resonance (MR) images to aid in target delineation on computed tomography (CT) images. However, obtaining MR images directly can be challenging. Therefore, we employ AI-based image generation techniques to “intelligentially generate” MR images from CT images to improve CTV delineation based on CT images. To generate high-quality MR images, we propose an attention-guided single-loop image generation model. The model can yield higher-quality images by introducing an attention mechanism in feature extraction and enhancing the loss function. Based on the generated MR images, we propose a CTV segmentation model fusing multi-scale features through image fusion and a hollow space pyramid module to enhance segmentation accuracy. The image generation model used in this study improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) from 14.87 and 0.58 to 16.72 and 0.67, respectively, and improves the feature distribution distance and learning-perception image similarity from 180.86 and 0.28 to 110.98 and 0.22, achieving higher quality image generation. The proposed segmentation method demonstrates high accuracy, compared with the FCN method, the intersection over union ratio and the Dice coefficient are improved from 0.8360 and 0.8998 to 0.9043 and 0.9473, respectively. Hausdorff distance and mean surface distance decreased from 5.5573 mm and 2.3269 mm to 4.7204 mm and 0.9397 mm, respectively, achieving clinically acceptable segmentation accuracy. Our method might reduce physicians’ manual workload and accelerate the diagnosis and treatment process while decreasing inter-observer variability in identifying anatomical structures.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CT-Based Intratumoral and Peritumoral Radiomics Nomograms for the Preoperative Prediction of Spread Through Air Spaces in Clinical Stage IA Non-small Cell Lung Cancer
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00939-1
Yun Wang, Deng Lyu, Lei Hu, Junhong Wu, Shaofeng Duan, Taohu Zhou, Wenting Tu, Yi Xiao, Li Fan, Shiyuan Liu
{"title":"CT-Based Intratumoral and Peritumoral Radiomics Nomograms for the Preoperative Prediction of Spread Through Air Spaces in Clinical Stage IA Non-small Cell Lung Cancer","authors":"Yun Wang, Deng Lyu, Lei Hu, Junhong Wu, Shaofeng Duan, Taohu Zhou, Wenting Tu, Yi Xiao, Li Fan, Shiyuan Liu","doi":"10.1007/s10278-023-00939-1","DOIUrl":"https://doi.org/10.1007/s10278-023-00939-1","url":null,"abstract":"<p>The study aims to investigate the value of intratumoral and peritumoral radiomics and clinical-radiological features for predicting spread through air spaces (STAS) in patients with clinical stage IA non-small cell lung cancer (NSCLC). A total of 336 NSCLC patients from our hospital were randomly divided into the training cohort (<i>n</i> = 236) and the internal validation cohort (<i>n</i> = 100) at a ratio of 7:3, and 69 patients from the other two external hospitals were collected as the external validation cohort. Univariate and multivariate analyses were used to select clinical-radiological features and construct a clinical model. The GTV, PTV5, PTV10, PTV15, PTV20, GPTV5, GPTV10, GPTV15, and GPTV20 models were constructed based on intratumoral and peritumoral (5 mm, 10 mm, 15 mm, 20 mm) radiomics features. Additionally, the radscore of the optimal radiomics model and clinical-radiological predictors were used to construct a combined model and plot a nomogram. Lastly, the ROC curve and AUC value were used to evaluate the diagnostic performance of the model. Tumor density type (OR = 6.738) and distal ribbon sign (OR = 5.141) were independent risk factors for the occurrence of STAS. The GPTV10 model outperformed the other radiomics models, and its AUC values were 0.887, 0.876, and 0.868 in the three cohorts. The AUC values of the combined model constructed based on GPTV10 radscore and clinical-radiological predictors were 0.901, 0.875, and 0.878. DeLong test results revealed that the combined model was superior to the clinical model in the three cohorts. The nomogram based on GPTV10 radscore and clinical-radiological features exhibited high predictive efficiency for STAS status in NSCLC.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00916-8
Sheikh Burhan ul haque, Aasim Zafar
{"title":"Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images","authors":"Sheikh Burhan ul haque, Aasim Zafar","doi":"10.1007/s10278-023-00916-8","DOIUrl":"https://doi.org/10.1007/s10278-023-00916-8","url":null,"abstract":"<p>In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability has been particularly vital during the COVID-19 pandemic, where rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can potentially disrupt the models’ decision-making processes. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model’s resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. We then assess the vulnerability of these models to three targeted adversarial attacks: fast gradient sign method (FGSM), projected gradient descent (PGD), and basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models’ resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139420755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robustness of Deep Networks for Mammography: Replication Across Public Datasets
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00943-5
{"title":"Robustness of Deep Networks for Mammography: Replication Across Public Datasets","authors":"","doi":"10.1007/s10278-023-00943-5","DOIUrl":"https://doi.org/10.1007/s10278-023-00943-5","url":null,"abstract":"<h3>Abstract</h3> <p>Deep neural networks have demonstrated promising performance in screening mammography with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data was available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved here an AUC of 0.9 on the external CMMD dataset (<em>N</em> = 826 exams). On the larger OMI-DB dataset (<em>N</em> = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers, as opposed to invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and was relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Urinary Stone Detection System for Abdominal Non-Enhanced CT Images Reduces the Burden on Radiologists
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00946-2
Zhaoyu Xing, Zuhui Zhu, Zhenxing Jiang, Jingshi Zhao, Qin Chen, Wei Xing, Liang Pan, Yan Zeng, Aie Liu, Jiule Ding
{"title":"Automatic Urinary Stone Detection System for Abdominal Non-Enhanced CT Images Reduces the Burden on Radiologists","authors":"Zhaoyu Xing, Zuhui Zhu, Zhenxing Jiang, Jingshi Zhao, Qin Chen, Wei Xing, Liang Pan, Yan Zeng, Aie Liu, Jiule Ding","doi":"10.1007/s10278-023-00946-2","DOIUrl":"https://doi.org/10.1007/s10278-023-00946-2","url":null,"abstract":"<p>To develop a fully automatic urinary stone detection system (kidney, ureter, and bladder) and to test it in a real clinical environment. The local institutional review board approved this retrospective single-center study that used non-enhanced abdominopelvic CT scans from patients admitted urology (uPatients) and emergency (ePatients). The uPatients were randomly divided into training and validation sets in a ratio of 3:1. We designed a cascade urinary stone map location-feature pyramid networks (USm-FPNs) and innovatively proposed a ureter distance heatmap method to estimate the ureter position on non-enhanced CT to further reduce the false positives. The performances of the system were compared using the free-response receiver operating characteristic curve and the precision-recall curve. This study included 811 uPatients and 356 ePatients. At stone level, the cascade detector USm-FPNs has the mean of false positives per scan (mFP) 1.88 with the sensitivity 0.977 in validation set, and mFP was further reduced to 1.18 with the sensitivity 0.977 after combining the ureter distance heatmap. At patient level, the sensitivity and precision were as high as 0.995 and 0.990 in validation set, respectively. In a real clinical set of ePatients (27.5% of patients contain stones), the mFP was 1.31 with as high as sensitivity 0.977, and the diagnostic time reduced by &gt; 20% with the system help. A fully automatic detection system for entire urinary stones on non-enhanced CT scans was proposed and reduces obviously the burden on junior radiologists without compromising sensitivity in real emergency data.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine Learning Supported the Modified Gustafson's Criteria for Dental Age Estimation in Southwest China
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00956-0
Xinhua Dai, Anjie Liu, Junhong Liu, Mengjun Zhan, Yuanyuan Liu, Wenchi Ke, Lei Shi, Xinyu Huang, Hu Chen, Zhenhua Deng, Fei Fan
{"title":"Machine Learning Supported the Modified Gustafson’s Criteria for Dental Age Estimation in Southwest China","authors":"Xinhua Dai, Anjie Liu, Junhong Liu, Mengjun Zhan, Yuanyuan Liu, Wenchi Ke, Lei Shi, Xinyu Huang, Hu Chen, Zhenhua Deng, Fei Fan","doi":"10.1007/s10278-023-00956-0","DOIUrl":"https://doi.org/10.1007/s10278-023-00956-0","url":null,"abstract":"<p>Adult age estimation is one of the most challenging problems in forensic science and physical anthropology. In this study, we aimed to develop and evaluate machine learning (ML) methods based on the modified Gustafson’s criteria for dental age estimation. In this retrospective study, a total of 851 orthopantomograms were collected from patients aged 15 to 40 years old. The secondary dentin formation (SE), periodontal recession (PE), and attrition (AT) of four mandibular premolars were analyzed according to the modified Gustafson’s criteria. Ten ML models were generated and compared for age estimation. The partial least squares regressor outperformed other models in males with a mean absolute error (MAE) of 4.151 years. The support vector regressor (MAE = 3.806 years) showed good performance in females. The accuracy of ML models is better than the single-tooth model provided in the previous studies (MAE = 4.747 years in males and MAE = 4.957 years in females). The Shapley additive explanations method was used to reveal the importance of the 12 features in ML models and found that AT and PE are the most influential in age estimation. The findings suggest that the modified Gustafson method can be effectively employed for adult age estimation in the southwest Chinese population. Furthermore, this study highlights the potential of machine learning models to assist experts in achieving accurate and interpretable age estimation.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00904-y
Ling Ma, Gen Li, Xingyu Feng, Qiliang Fan, Lizhi Liu
{"title":"TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images","authors":"Ling Ma, Gen Li, Xingyu Feng, Qiliang Fan, Lizhi Liu","doi":"10.1007/s10278-023-00904-y","DOIUrl":"https://doi.org/10.1007/s10278-023-00904-y","url":null,"abstract":"<p>Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules in the early stage, detecting the pulmonary nodules in an early phase could enhance the treatment efficiency and improve the survival rate of patients. The development of computer-aided analysis technology has made it possible to automatically detect lung nodules in Computed Tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet. It is attempted to embed a transformer module in the 3D Convolutional Neural Network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both the short- and long-range dependency to provide rich information on the characteristics of nodules. Second, we design the attention block and multi-scale skip pathways for improving the detection of small nodules. Last, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 dataset and PN9 dataset showed that our proposed TiCNet achieved superior performance compared with existing lung nodule detection methods. Moreover, the effectiveness of each module has been proven. The proposed TiCNet model is an effective tool for pulmonary nodule detection. Validation revealed that this model exhibited excellent performance, suggesting its potential usefulness to support lung cancer screening.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An MRI-Based Deep Transfer Learning Radiomics Nomogram to Predict Ki-67 Proliferation Index of Meningioma
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00937-3
Chongfeng Duan, Dapeng Hao, Jiufa Cui, Gang Wang, Wenjian Xu, Nan Li, Xuejun Liu
{"title":"An MRI-Based Deep Transfer Learning Radiomics Nomogram to Predict Ki-67 Proliferation Index of Meningioma","authors":"Chongfeng Duan, Dapeng Hao, Jiufa Cui, Gang Wang, Wenjian Xu, Nan Li, Xuejun Liu","doi":"10.1007/s10278-023-00937-3","DOIUrl":"https://doi.org/10.1007/s10278-023-00937-3","url":null,"abstract":"<p>The objective of this study was to predict Ki-67 proliferation index of meningioma by using a nomogram based on clinical, radiomics, and deep transfer learning (DTL) features. A total of 318 cases were enrolled in the study. The clinical, radiomics, and DTL features were selected to construct models. The calculation of radiomics and DTL score was completed by using selected features and correlation coefficient. The deep transfer learning radiomics (DTLR) nomogram was constructed by selected clinical features, radiomics score, and DTL score. The area under the receiver operator characteristic curve (AUC) was calculated. The models were compared by Delong test of AUCs and decision curve analysis (DCA). The features of sex, size, and peritumoral edema were selected to construct clinical model. Seven radiomics features and 15 DTL features were selected. The AUCs of clinical, radiomics, DTL model, and DTLR nomogram were 0.746, 0.75, 0.717, and 0.779 respectively. DTLR nomogram had the highest AUC of 0.779 (95% CI 0.6643–0.8943) with an accuracy rate of 0.734, a sensitivity value of 0.719, and a specificity value of 0.75 in test set. There was no significant difference in AUCs among four models in Delong test. The DTLR nomogram had a larger net benefit than other models across all the threshold probability. The DTLR nomogram had a satisfactory performance in Ki-67 prediction and could be a new evaluation method of meningioma which would be useful in the clinical decision-making.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PET KinetiX—A Software Solution for PET Parametric Imaging at the Whole Field of View Level
IF 4.4, CAS Q2, Engineering & Technology
Journal of Digital Imaging Pub Date: 2024-01-10 DOI: 10.1007/s10278-023-00965-z
Florent L. Besson, Sylvain Faure
{"title":"PET KinetiX—A Software Solution for PET Parametric Imaging at the Whole Field of View Level","authors":"Florent L. Besson, Sylvain Faure","doi":"10.1007/s10278-023-00965-z","DOIUrl":"https://doi.org/10.1007/s10278-023-00965-z","url":null,"abstract":"<p>Kinetic modeling represents the ultimate foundations of PET quantitative imaging, a unique opportunity to better characterize the diseases or prevent the reduction of drugs development. Primarily designed for research, parametric imaging based on PET kinetic modeling may become a reality in future clinical practice, enhanced by the technical abilities of the latest generation of commercially available PET systems. In the era of precision medicine, such paradigm shift should be promoted, regardless of the PET system. In order to anticipate and stimulate this emerging clinical paradigm shift, we developed a constructor-independent software package, called PET KinetiX, allowing a faster and easier computation of parametric images from any 4D PET DICOM series, at the whole field of view level. The PET KinetiX package is currently a plug-in for Osirix DICOM viewer. The package provides a suite of five PET kinetic models: Patlak, Logan, 1-tissue compartment model, 2-tissue compartment model, and first pass blood flow. After uploading the 4D-PET DICOM series into Osirix, the image processing requires very few steps: the choice of the kinetic model and the definition of an input function. After a 2-min process, the PET parametric and error maps of the chosen model are automatically estimated voxel-wise and written in DICOM format. The software benefits from the graphical user interface of Osirix, making it user-friendly. Compared to PMOD-PKIN (version 4.4) on twelve <sup>18</sup>F-FDG PET dynamic datasets, PET KinetiX provided an absolute bias of 0.1% (0.05–0.25) and 5.8% (3.3–12.3) for Ki<sub>Patlak</sub> and Ki<sub>2TCM</sub>, respectively. Several clinical research illustrative cases acquired on different hybrid PET systems (standard or extended axial fields of view, PET/CT, and PET/MRI), with different acquisition schemes (single-bed single-pass or multi-bed multipass), are also provided. PET KinetiX is a very fast and efficient independent research software that helps molecular imaging users easily and quickly produce 3D PET parametric images from any reconstructed 4D-PET data acquired on standard or large PET systems.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0