Journal of Digital Imaging — Latest Articles

MRI-based Machine Learning Radiomics Can Predict CSF1R Expression Level and Prognosis in High-grade Gliomas
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-24 DOI: 10.1007/s10278-023-00905-x
Yuling Lai, Yiyang Wu, Xiangyuan Chen, Wenchao Gu, Guoxia Zhou, Meilin Weng
{"title":"MRI-based Machine Learning Radiomics Can Predict CSF1R Expression Level and Prognosis in High-grade Gliomas","authors":"Yuling Lai, Yiyang Wu, Xiangyuan Chen, Wenchao Gu, Guoxia Zhou, Meilin Weng","doi":"10.1007/s10278-023-00905-x","DOIUrl":"https://doi.org/10.1007/s10278-023-00905-x","url":null,"abstract":"<p>The purpose of this study is to predict the mRNA expression of CSF1R in HGG non-invasively using MRI (magnetic resonance imaging) omics technology and to evaluate the correlation between the established radiomics model and prognosis. We investigated the predictive value of CSF1R in the Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA) database. The Support vector machine (SVM) and the Logistic regression (LR) algorithms were used to create a radiomics_score (Rad_score), respectively. The effectiveness and performance of the radiomics model was assessed in the training (n = 89) and tenfold cross-validation sets. We further analyzed the correlation between Rad_score and macrophage-related genes using Spearman correlation analysis. A radiomics nomogram combining the clinical factors and Rad_score was constructed to validate the radiomic signatures for individualized survival estimation and risk stratification. The results showed that CSF1R expression was markedly elevated in HGG tissues, which was related to worse prognosis. CSF1R expression was closely related to the abundance of infiltrating immune cells, such as macrophages. We identified nine features for establishing a radiomics model. The radiomics model predicting CSF1R achieved high AUC in training (0.768 in SVM and 0.792 in LR) and tenfold cross-validation sets (0.706 in SVM and 0.717 in LR). Rad_score was highly associated with tumor-related macrophage genes. A radiomics nomogram combining the Rad_score and clinical factors was constructed and revealed satisfactory performance. MRI-based Rad_score is a novel way to predict CSF1R expression and prognosis in high-grade glioma patients. The radiomics nomogram could optimize individualized survival estimation for HGG patients.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"27 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139560441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
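For readers who want to see the general shape of such a pipeline, the sketch below fits SVM and logistic-regression classifiers on a radiomics feature matrix and uses cross-validated probabilities as a Rad_score scored by AUC. It is a minimal illustration on synthetic data, not the authors' code; the feature matrix, labels, and the use of out-of-fold probabilities as the score are assumptions of this sketch.

```python
# Minimal sketch (synthetic data): cross-validated SVM and logistic
# regression on radiomics features, with out-of-fold probabilities used as
# a Rad_score and scored by AUC. Not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 9))        # 89 training cases, 9 selected radiomics features
y = rng.integers(0, 2, size=89)     # high vs. low CSF1R expression (toy labels)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    rad_score = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    print(name, "cross-validated AUC:", round(roc_auc_score(y, rad_score), 3))
```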
Predicting Risk Stratification in Early-Stage Endometrial Carcinoma: Significance of Multiparametric MRI Radiomics Model
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-18 DOI: 10.1007/s10278-023-00936-4
Huan Meng, Yu-Feng Sun, Yu Zhang, Ya-Nan Yu, Jing Wang, Jia-Ning Wang, Lin-Yan Xue, Xiao-Ping Yin
{"title":"Predicting Risk Stratification in Early-Stage Endometrial Carcinoma: Significance of Multiparametric MRI Radiomics Model","authors":"Huan Meng, Yu-Feng Sun, Yu Zhang, Ya-Nan Yu, Jing Wang, Jia-Ning Wang, Lin-Yan Xue, Xiao-Ping Yin","doi":"10.1007/s10278-023-00936-4","DOIUrl":"https://doi.org/10.1007/s10278-023-00936-4","url":null,"abstract":"<p>Endometrial carcinoma (EC) risk stratification prior to surgery is crucial for clinical treatment. In this study, we intend to evaluate the predictive value of radiomics models based on magnetic resonance imaging (MRI) for risk stratification and staging of early-stage EC. The study included 155 patients who underwent MRI examinations prior to surgery and were pathologically diagnosed with early-stage EC between January, 2020, and September, 2022. Three-dimensional radiomics features were extracted from segmented tumor images captured by MRI scans (including T2WI, CE-T1WI delayed phase, and ADC), with 1521 features extracted from each of the three modalities. Then, using five-fold cross-validation and a multilayer perceptron algorithm, these features were filtered using Pearson’s correlation coefficient to develop a prediction model for risk stratification and staging of EC. The performance of each model was assessed by analyzing ROC curves and calculating the AUC, accuracy, sensitivity, and specificity. In terms of risk stratification, the CE-T1 sequence demonstrated the highest predictive accuracy of 0.858 ± 0.025 and an AUC of 0.878 ± 0.042 among the three sequences. However, combining all three sequences resulted in enhanced predictive accuracy, reaching 0.881 ± 0.040, with a corresponding increase in the AUC to 0.862 ± 0.069. In the context of staging, the utilization of a combination involving T2WI with CE-T1WI led to a notably elevated predictive accuracy of 0.956 ± 0.020, surpassing the accuracy achieved when employing any singular feature. Correspondingly, the AUC was 0.979 ± 0.022. When incorporating all three sequences concurrently, the predictive accuracy reached 0.956 ± 0.000, accompanied by an AUC of 0.986 ± 0.007. It is noteworthy that this level of accuracy surpassed that of the radiologist, which stood at 0.832. The MRI radiomics model has the potential to accurately predict the risk stratification and early staging of EC.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"26 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139509329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
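As a rough illustration of the workflow described (Pearson-based feature filtering followed by a multilayer perceptron evaluated with five-fold cross-validation), the sketch below uses synthetic data; the 0.15 correlation threshold, network size, and arrays are placeholders rather than the authors' settings.

```python
# Minimal sketch (synthetic data): filter radiomics features by Pearson
# correlation with the label, then evaluate an MLP with five-fold CV.
# The threshold and hidden-layer size are illustrative only.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(155, 1521))    # 155 patients, 1521 features per sequence
y = rng.integers(0, 2, size=155)    # low- vs. high-risk label (toy)

corr = np.array([pearsonr(X[:, j], y)[0] for j in range(X.shape[1])])
keep = np.abs(corr) > 0.15          # keep features most correlated with the label
print("features kept:", int(keep.sum()))

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X[:, keep], y, cv=cv, scoring="roc_auc")
print("AUC: %.3f +/- %.3f" % (aucs.mean(), aucs.std()))
```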
Automatic 3D Segmentation and Identification of Anomalous Aortic Origin of the Coronary Arteries Combining Multi-view 2D Convolutional Neural Networks
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-17 DOI: 10.1007/s10278-023-00950-6
Ariel Fernando Pascaner, Antonio Rosato, Alice Fantazzini, Elena Vincenzi, Curzio Basso, Francesco Secchi, Mauro Lo Rito, Michele Conti
{"title":"Automatic 3D Segmentation and Identification of Anomalous Aortic Origin of the Coronary Arteries Combining Multi-view 2D Convolutional Neural Networks","authors":"Ariel Fernando Pascaner, Antonio Rosato, Alice Fantazzini, Elena Vincenzi, Curzio Basso, Francesco Secchi, Mauro Lo Rito, Michele Conti","doi":"10.1007/s10278-023-00950-6","DOIUrl":"https://doi.org/10.1007/s10278-023-00950-6","url":null,"abstract":"<p>This work aimed to automatically segment and classify the coronary arteries with either normal or anomalous origin from the aorta (AAOCA) using convolutional neural networks (CNNs), seeking to enhance and fasten clinician diagnosis. We implemented three single-view 2D Attention U-Nets with 3D view integration and trained them to automatically segment the aortic root and coronary arteries of 124 computed tomography angiographies (CTAs), with normal coronaries or AAOCA. Furthermore, we automatically classified the segmented geometries as normal or AAOCA using a decision tree model. For CTAs in the test set (<i>n</i> = 13), we obtained median Dice score coefficients of 0.95 and 0.84 for the aortic root and the coronary arteries, respectively. Moreover, the classification between normal and AAOCA showed excellent performance with accuracy, precision, and recall all equal to 1 in the test set. We developed a deep learning-based method to automatically segment and classify normal coronary and AAOCA. Our results represent a step towards an automatic screening and risk profiling of patients with AAOCA, based on CTA.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"176 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139497350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
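Two of the building blocks mentioned above, the Dice score used to evaluate the segmentations and a decision tree classifying the segmented geometries, are generic enough to sketch. The geometric features below are hypothetical stand-ins, not the descriptors used in the paper.

```python
# Minimal sketch: Dice similarity coefficient for binary masks, plus a
# decision tree classifying normal vs. AAOCA from two hypothetical
# geometric features derived from the segmented anatomy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks of identical shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=(64, 64, 64))
pred = truth.copy()
pred[:8] = 0                                          # simulate an imperfect prediction
print("Dice:", round(dice(pred, truth), 3))

feats = rng.normal(size=(124, 2))                     # e.g. ostium height, take-off angle (toy)
labels = (feats[:, 0] + feats[:, 1] > 0).astype(int)  # toy AAOCA label
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(feats, labels)
print("training accuracy:", round(tree.score(feats, labels), 3))
```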
Detecting Avascular Necrosis of the Lunate from Radiographs Using a Deep-Learning Model
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-16 DOI: 10.1007/s10278-023-00964-0
Krista Wernér, Turkka Anttila, Sina Hulkkonen, Timo Viljakka, Ville Haapamäki, Jorma Ryhänen
{"title":"Detecting Avascular Necrosis of the Lunate from Radiographs Using a Deep-Learning Model","authors":"Krista Wernér, Turkka Anttila, Sina Hulkkonen, Timo Viljakka, Ville Haapamäki, Jorma Ryhänen","doi":"10.1007/s10278-023-00964-0","DOIUrl":"https://doi.org/10.1007/s10278-023-00964-0","url":null,"abstract":"<p>Deep-learning (DL) algorithms have the potential to change medical image classification and diagnostics in the coming decade. Delayed diagnosis and treatment of avascular necrosis (AVN) of the lunate may have a detrimental effect on patient hand function. The aim of this study was to use a segmentation-based DL model to diagnose AVN of the lunate from wrist postero-anterior radiographs. A total of 319 radiographs of the diseased lunate and 1228 control radiographs were gathered from Helsinki University Central Hospital database. Of these, 10% were separated to form a test set for model validation. MRI confirmed the absence of disease. In cases of AVN of the lunate, a hand surgeon at Helsinki University Hospital validated the accurate diagnosis using either MRI or radiography. For detection of AVN, the model had a sensitivity of 93.33% (95% confidence interval (CI) 77.93–99.18%), specificity of 93.28% (95% CI 87.18–97.05%), and accuracy of 93.28% (95% CI 87.99–96.73%). The area under the receiver operating characteristic curve was 0.94 (95% CI 0.88–0.99). Compared to three clinical experts, the DL model had better AUC than one clinical expert and only one expert had higher accuracy than the DL model. The results were otherwise similar between the model and clinical experts. Our DL model performed well and may be a future beneficial tool for screening of AVN of the lunate.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"1 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139475901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
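The headline metrics in this abstract (sensitivity, specificity, accuracy, and AUC with confidence intervals) can be reproduced from any model's test-set outputs. The sketch below uses synthetic labels and scores and a percentile bootstrap, which is only one of several ways such intervals are computed.

```python
# Minimal sketch (synthetic data): sensitivity, specificity, accuracy, and
# AUC with a percentile-bootstrap 95% CI from test-set labels and scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=150)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=150), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", round(tp / (tp + fn), 4))
print("specificity:", round(tn / (tn + fp), 4))
print("accuracy:   ", round((tp + tn) / len(y_true), 4))

aucs = []
for _ in range(2000):                      # bootstrap resampling of the test set
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:
        continue                           # AUC undefined for single-class resamples
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
print("AUC:", round(roc_auc_score(y_true, y_score), 3),
      "95% CI:", np.round(np.percentile(aucs, [2.5, 97.5]), 3))
```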
Development and Validation of a 3D Resnet Model for Prediction of Lymph Node Metastasis in Head and Neck Cancer Patients
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-16 DOI: 10.1007/s10278-023-00938-2
Yi-Hui Lin, Chieh-Ting Lin, Ya-Han Chang, Yen-Yu Lin, Jen-Jee Chen, Chun-Rong Huang, Yu-Wei Hsu, Weir-Chiang You
{"title":"Development and Validation of a 3D Resnet Model for Prediction of Lymph Node Metastasis in Head and Neck Cancer Patients","authors":"Yi-Hui Lin, Chieh-Ting Lin, Ya-Han Chang, Yen-Yu Lin, Jen-Jee Chen, Chun-Rong Huang, Yu-Wei Hsu, Weir-Chiang You","doi":"10.1007/s10278-023-00938-2","DOIUrl":"https://doi.org/10.1007/s10278-023-00938-2","url":null,"abstract":"<p>The accurate diagnosis and staging of lymph node metastasis (LNM) are crucial for determining the optimal treatment strategy for head and neck cancer patients. We aimed to develop a 3D Resnet model and investigate its prediction value in detecting LNM. This study enrolled 156 head and neck cancer patients and analyzed 342 lymph nodes segmented from surgical pathologic reports. The patients’ clinical and pathological data related to the primary tumor site and clinical and pathology T and N stages were collected. To predict LNM, we developed a dual-pathway 3D Resnet model incorporating two Resnet models with different depths to extract features from the input data. To assess the model’s performance, we compared its predictions with those of radiologists in a test dataset comprising 38 patients. The study found that the dimensions and volume of LNM + were significantly larger than those of LNM-. Specifically, the <i>Y</i> and <i>Z</i> dimensions showed the highest sensitivity of 84.6% and specificity of 72.2%, respectively, in predicting LNM + . The analysis of various variations of the proposed 3D Resnet model demonstrated that Dual-3D-Resnet models with a depth of 34 achieved the highest AUC values of 0.9294. In the validation test of 38 patients and 86 lymph nodes dataset, the 3D Resnet model outperformed both physical examination and radiologists in terms of sensitivity (80.8% compared to 50.0% and 91.7%, respectively), specificity(90.0% compared to 88.5% and 65.4%, respectively), and positive predictive value (77.8% compared to 66.7% and 55.0%, respectively) in detecting individual LNM + . These results suggest that the 3D Resnet model can be valuable for accurately identifying LNM + in head and neck cancer patients. A prospective trial is needed to evaluate further the role of the 3D Resnet model in determining LNM + in head and neck cancer patients and its impact on treatment strategies and patient outcomes.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"25 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139495849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
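To make the "dual-pathway 3D ResNet" idea concrete, the sketch below fuses the pooled features of two 3D video backbones with a linear head. The paper combines 3D ResNets of different depths (best at depth 34); since torchvision only ships depth-18 video ResNets, r3d_18 and mc3_18 stand in here, and the input shape is an arbitrary toy volume.

```python
# Minimal sketch: a dual-pathway 3D CNN that concatenates pooled features
# from two backbones before a linear classification head. r3d_18 and
# mc3_18 are stand-ins for the paper's ResNets of different depths.
import torch
import torch.nn as nn
from torchvision.models.video import mc3_18, r3d_18

class DualPathway3D(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.path_a = r3d_18(weights=None)
        self.path_b = mc3_18(weights=None)
        fused_dim = self.path_a.fc.in_features + self.path_b.fc.in_features
        self.path_a.fc = nn.Identity()      # expose the 512-d pooled features
        self.path_b.fc = nn.Identity()
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, depth, height, width) lymph-node volume
        return self.head(torch.cat([self.path_a(x), self.path_b(x)], dim=1))

model = DualPathway3D()
logits = model(torch.randn(2, 3, 16, 64, 64))   # two toy volumes
print(logits.shape)                             # torch.Size([2, 2])
```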
Lightweight Attentive Graph Neural Network with Conditional Random Field for Diagnosis of Anterior Cruciate Ligament Tear
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-16 DOI: 10.1007/s10278-023-00944-4
Jiaoju Wang, Jiewen Luo, Jiehui Liang, Yangbo Cao, Jing Feng, Lingjie Tan, Zhengcheng Wang, Jingming Li, Alphonse Houssou Hounye, Muzhou Hou, Jinshen He
{"title":"Lightweight Attentive Graph Neural Network with Conditional Random Field for Diagnosis of Anterior Cruciate Ligament Tear","authors":"Jiaoju Wang, Jiewen Luo, Jiehui Liang, Yangbo Cao, Jing Feng, Lingjie Tan, Zhengcheng Wang, Jingming Li, Alphonse Houssou Hounye, Muzhou Hou, Jinshen He","doi":"10.1007/s10278-023-00944-4","DOIUrl":"https://doi.org/10.1007/s10278-023-00944-4","url":null,"abstract":"<p>Anterior cruciate ligament (ACL) tears are prevalent orthopedic sports injuries and are difficult to precisely classify. Previous works have demonstrated the ability of deep learning (DL) to provide support for clinicians in ACL tear classification scenarios, but it requires a large quantity of labeled samples and incurs a high computational expense. This study aims to overcome the challenges brought by small and imbalanced data and achieve fast and accurate ACL tear classification based on magnetic resonance imaging (MRI) of the knee. We propose a lightweight attentive graph neural network (GNN) with a conditional random field (CRF), named the ACGNN, to classify ACL ruptures in knee MR images. A metric-based meta-learning strategy is introduced to conduct independent testing through multiple node classification tasks. We design a lightweight feature embedding network using a feature-based knowledge distillation method to extract features from the given images. Then, GNN layers are used to find the dependencies between samples and complete the classification process. The CRF is incorporated into each GNN layer to refine the affinities. To mitigate oversmoothing and overfitting issues, we apply self-boosting attention, node attention, and memory attention for graph initialization, node updating, and correlation across graph layers, respectively. Experiments demonstrated that our model provided excellent performance on both oblique coronal data and sagittal data with accuracies of 92.94% and 91.92%, respectively. Notably, our proposed method exhibited comparable performance to that of orthopedic surgeons during an internal clinical validation. This work shows the potential of our method to advance ACL diagnosis and facilitates the development of computer-aided diagnosis methods for use in clinical practice.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"3 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139497352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
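The full ACGNN (knowledge-distilled embedding network, CRF refinement, three attention mechanisms) is beyond a short example, but the core idea of classifying samples as nodes of a similarity graph can be sketched. Below, edge weights come from pairwise embedding distances and one message-passing layer updates node features before a per-node classifier; everything here is an illustrative simplification, not the authors' architecture.

```python
# Minimal sketch: treat image embeddings as graph nodes, derive edge
# weights from pairwise distances, and apply simple message passing before
# a per-node ACL-tear classifier. CRF refinement and the ACGNN attention
# modules are intentionally omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) embeddings of the MR images in one episode
        affinity = torch.softmax(-torch.cdist(h, h), dim=-1)   # row-normalised edges
        messages = affinity @ h                                 # aggregate neighbours
        return F.relu(self.update(torch.cat([h, messages], dim=-1)))

num_nodes, dim = 10, 64
h = torch.randn(num_nodes, dim)          # embeddings from a lightweight CNN (toy values)
gnn = DenseGNNLayer(dim)
classifier = nn.Linear(dim, 2)           # ACL tear vs. intact
logits = classifier(gnn(gnn(h)))         # two rounds of message passing
print(logits.shape)                      # torch.Size([10, 2])
```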
The Segmentation of Multiple Types of Uterine Lesions in Magnetic Resonance Images Using a Sequential Deep Learning Method with Image-Level Annotations
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-16 DOI: 10.1007/s10278-023-00931-9
Yu-meng Cui, Hua-li Wang, Rui Cao, Hong Bai, Dan Sun, Jiu-xiang Feng, Xue-feng Lu
{"title":"The Segmentation of Multiple Types of Uterine Lesions in Magnetic Resonance Images Using a Sequential Deep Learning Method with Image-Level Annotations","authors":"Yu-meng Cui, Hua-li Wang, Rui Cao, Hong Bai, Dan Sun, Jiu-xiang Feng, Xue-feng Lu","doi":"10.1007/s10278-023-00931-9","DOIUrl":"https://doi.org/10.1007/s10278-023-00931-9","url":null,"abstract":"<p>Fully supervised medical image segmentation methods use pixel-level labels to achieve good results, but obtaining such large-scale, high-quality labels is cumbersome and time consuming. This study aimed to develop a weakly supervised model that only used image-level labels to achieve automatic segmentation of four types of uterine lesions and three types of normal tissues on magnetic resonance images. The MRI data of the patients were retrospectively collected from the database of our institution, and the T2-weighted sequence images were selected and only image-level annotations were made. The proposed two-stage model can be divided into four sequential parts: the pixel correlation module, the class re-activation map module, the inter-pixel relation network module, and the Deeplab v3 + module. The dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average symmetric surface distance (ASSD) were employed to evaluate the performance of the model. The original dataset consisted of 85,730 images from 316 patients with four different types of lesions (i.e., endometrial cancer, uterine leiomyoma, endometrial polyps, and atypical hyperplasia of endometrium). A total number of 196, 57, and 63 patients were randomly selected for model training, validation, and testing. After being trained from scratch, the proposed model showed a good segmentation performance with an average DSC of 83.5%, HD of 29.3 mm, and ASSD of 8.83 mm, respectively. As far as the weakly supervised methods using only image-level labels are concerned, the performance of the proposed model is equivalent to the state-of-the-art weakly supervised methods.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"120 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139497351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
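A central ingredient of image-level weak supervision is turning a classifier's evidence into a coarse localisation map. The sketch below computes a plain class activation map (CAM) from a ResNet-18 classifier; it illustrates the general CAM idea only, not the paper's pixel-correlation or class re-activation modules, and the five-class head is an assumption made for illustration.

```python
# Minimal sketch: a generic class activation map (CAM) from a ResNet-18
# classifier, as a stand-in for the seed-generation step of image-level
# weakly supervised segmentation. The 5-class head is illustrative only.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=5)      # e.g. 4 lesion types + background
model.eval()

features = {}
model.layer4.register_forward_hook(lambda m, i, o: features.update(last=o))

x = torch.randn(1, 3, 224, 224)                    # one T2-weighted slice (toy tensor)
with torch.no_grad():
    logits = model(x)
cls = int(logits.argmax(dim=1))                    # predicted image-level class
cam = torch.einsum("c,bchw->bhw", model.fc.weight[cls], features["last"])
cam = F.relu(cam).unsqueeze(1)                     # keep positive evidence only
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                                   # torch.Size([1, 1, 224, 224])
```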
Developing the Lung Graph-Based Machine Learning Model for Identification of Fibrotic Interstitial Lung Diseases
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-16 DOI: 10.1007/s10278-023-00909-7
Haishuang Sun, Min Liu, Anqi Liu, Mei Deng, Xiaoyan Yang, Han Kang, Ling Zhao, Yanhong Ren, Bingbing Xie, Rongguo Zhang, Huaping Dai
{"title":"Developing the Lung Graph-Based Machine Learning Model for Identification of Fibrotic Interstitial Lung Diseases","authors":"Haishuang Sun, Min Liu, Anqi Liu, Mei Deng, Xiaoyan Yang, Han Kang, Ling Zhao, Yanhong Ren, Bingbing Xie, Rongguo Zhang, Huaping Dai","doi":"10.1007/s10278-023-00909-7","DOIUrl":"https://doi.org/10.1007/s10278-023-00909-7","url":null,"abstract":"<p>Accurate detection of fibrotic interstitial lung disease (f-ILD) is conducive to early intervention. Our aim was to develop a lung graph-based machine learning model to identify f-ILD. A total of 417 HRCTs from 279 patients with confirmed ILD (156 f-ILD and 123 non-f-ILD) were included in this study. A lung graph-based machine learning model based on HRCT was developed for aiding clinician to diagnose f-ILD. In this approach, local radiomics features were extracted from an automatically generated geometric atlas of the lung and used to build a series of specific lung graph models. Encoding these lung graphs, a lung descriptor was gained and became as a characterization of global radiomics feature distribution to diagnose f-ILD. The Weighted Ensemble model showed the best predictive performance in cross-validation. The classification accuracy of the model was significantly higher than that of the three radiologists at both the CT sequence level and the patient level. At the patient level, the diagnostic accuracy of the model versus radiologists A, B, and C was 0.986 (95% CI 0.959 to 1.000), 0.918 (95% CI 0.849 to 0.973), 0.822 (95% CI 0.726 to 0.904), and 0.904 (95% CI 0.836 to 0.973), respectively. There was a statistically significant difference in AUC values between the model and 3 physicians (<i>p</i> &lt; 0.05). The lung graph-based machine learning model could identify f-ILD, and the diagnostic performance exceeded radiologists which could aid clinicians to assess ILD objectively.</p><h3 data-test=\"abstract-sub-heading\">Graphical Abstract</h3>\u0000<p>Given a sequence of HRCT slices from a patient, the lung field is first automatically extracted. Next, this lung region is divided into 36 sub-regions using geometric rules, obtaining a lung atlas. And then, the lung graph is built based on 3D radiomics features of each sub-region of the lung atlas. Finally, the model’s predictions were compared to the physicians’ assessment results.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"23 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139496105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
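The sketch below gives one plausible, much-simplified reading of "encoding a lung graph into a descriptor": each of 36 atlas sub-regions contributes a local feature vector, edges are weighted by cosine similarity, and pooled node and degree statistics form a global descriptor fed to a classifier. The feature values, the pooling scheme, and the gradient-boosting stand-in for the Weighted Ensemble are all assumptions of this sketch.

```python
# Minimal sketch (synthetic data): build a global descriptor from a graph
# over 36 lung-atlas sub-regions and classify f-ILD vs. non-f-ILD with a
# gradient-boosting model standing in for the paper's Weighted Ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def lung_graph_descriptor(region_feats: np.ndarray) -> np.ndarray:
    """region_feats: (36, n_features) local radiomics per atlas sub-region."""
    norm = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    adjacency = norm @ norm.T                  # cosine-similarity edge weights
    degree = adjacency.sum(axis=1)             # simple per-node graph statistic
    return np.concatenate([region_feats.mean(axis=0), region_feats.std(axis=0), degree])

rng = np.random.default_rng(0)
X = np.stack([lung_graph_descriptor(rng.normal(size=(36, 20))) for _ in range(120)])
y = rng.integers(0, 2, size=120)               # f-ILD vs. non-f-ILD (toy labels)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print("descriptor length:", X.shape[1], "| training accuracy:", round(clf.score(X, y), 3))
```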
Text Report Analysis to Identify Opportunities for Optimizing Target Selection for Chest Radiograph Artificial Intelligence Models
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-12 DOI: 10.1007/s10278-023-00927-5
Carl Sabottke, Jason Lee, Alan Chiang, Bradley Spieler, Raza Mushtaq
{"title":"Text Report Analysis to Identify Opportunities for Optimizing Target Selection for Chest Radiograph Artificial Intelligence Models","authors":"Carl Sabottke, Jason Lee, Alan Chiang, Bradley Spieler, Raza Mushtaq","doi":"10.1007/s10278-023-00927-5","DOIUrl":"https://doi.org/10.1007/s10278-023-00927-5","url":null,"abstract":"<p>Our goal was to analyze radiology report text for chest radiographs (CXRs) to identify imaging findings that have the most impact on report length and complexity. Identifying these imaging findings can highlight opportunities for designing CXR AI systems which increase radiologist efficiency. We retrospectively analyzed text from 210,025 MIMIC-CXR reports and 168,949 reports from our local institution collected from 2019 to 2022. Fifty-nine categories of imaging finding keywords were extracted from reports using natural language processing (NLP), and their impact on report length was assessed using linear regression with and without LASSO regularization. Regression was also used to assess the impact of additional factors contributing to report length, such as the signing radiologist and use of terms of perception. For modeling CXR report word counts with regression, mean coefficient of determination, <i>R</i><sup>2</sup>, was 0.469 ± 0.001 for local reports and 0.354 ± 0.002 for MIMIC-CXR when considering only imaging finding keyword features. Mean <i>R</i><sup>2</sup> was significantly less at 0.067 ± 0.001 for local reports and 0.086 ± 0.002 for MIMIC-CXR, when only considering use of terms of perception. For a combined model for the local report data accounting for the signing radiologist, imaging finding keywords, and terms of perception, the mean <i>R</i><sup>2</sup> was 0.570 ± 0.002. With LASSO, highest value coefficients pertained to endotracheal tubes and pleural drains for local data and masses, nodules, and cavitary and cystic lesions for MIMIC-CXR. Natural language processing and regression analysis of radiology report textual data can highlight imaging targets for AI models which offer opportunities to bolster radiologist efficiency.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"34 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139463159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
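The regression analysis described here, word count regressed on binary keyword indicators with and without LASSO, is simple to reproduce in outline. The sketch below uses a synthetic keyword matrix and effect sizes; the 59-keyword count comes from the abstract, but everything else is invented for illustration.

```python
# Minimal sketch (synthetic data): regress report word counts on 59 binary
# imaging-finding keyword indicators with OLS and cross-validated LASSO,
# reporting R^2 and the largest LASSO coefficients.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n_reports, n_keywords = 5000, 59
X = rng.integers(0, 2, size=(n_reports, n_keywords)).astype(float)   # keyword present/absent
true_effect = rng.gamma(2.0, 5.0, size=n_keywords)                   # words added per finding (toy)
word_count = 40 + X @ true_effect + rng.normal(0, 15, size=n_reports)

ols = LinearRegression().fit(X, word_count)
lasso = LassoCV(cv=5, random_state=0).fit(X, word_count)
print("OLS R^2:  ", round(ols.score(X, word_count), 3))
print("LASSO R^2:", round(lasso.score(X, word_count), 3))
top = np.argsort(np.abs(lasso.coef_))[::-1][:5]
print("largest LASSO coefficients (keyword indices):", top.tolist())
```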
Using the Textual Content of Radiological Reports to Detect Emerging Diseases: A Proof-of-Concept Study of COVID-19
IF 4.4, CAS Q2 (Engineering & Technology)
Journal of Digital Imaging Pub Date : 2024-01-12 DOI: 10.1007/s10278-023-00949-z
{"title":"Using the Textual Content of Radiological Reports to Detect Emerging Diseases: A Proof-of-Concept Study of COVID-19","authors":"","doi":"10.1007/s10278-023-00949-z","DOIUrl":"https://doi.org/10.1007/s10278-023-00949-z","url":null,"abstract":"<h3>Abstract</h3> <p>Changes in the content of radiological reports at population level could detect emerging diseases. Herein, we developed a method to quantify similarities in consecutive temporal groupings of radiological reports using natural language processing, and we investigated whether appearance of dissimilarities between consecutive periods correlated with the beginning of the COVID-19 pandemic in France. CT reports from 67,368 consecutive adults across 62 emergency departments throughout France between October 2019 and March 2020 were collected. Reports were vectorized using time frequency–inverse document frequency (TF-IDF) analysis on one-grams. For each successive 2-week period, we performed unsupervised clustering of the reports based on TF-IDF values and partition-around-medoids. Next, we assessed the similarities between this clustering and a clustering from two weeks before according to the average adjusted Rand index (AARI). Statistical analyses included (1) cross-correlation functions (CCFs) with the number of positive SARS-CoV-2 tests and advanced sanitary index for flu syndromes (ASI-flu, from open-source dataset), and (2) linear regressions of time series at different lags to understand the variations of AARI over time. Overall, 13,235 chest CT reports were analyzed. AARI was correlated with ASI-flu at lag = + 1, + 5, and + 6 weeks (<em>P</em> = 0.0454, 0.0121, and 0.0042, respectively) and with SARS-CoV-2 positive tests at lag = − 1 and 0 week (<em>P</em> = 0.0057 and 0.0001, respectively). In the best fit, AARI correlated with the ASI-flu with a lag of 2 weeks (<em>P</em> = 0.0026), SARS-CoV-2-positive tests in the same week (<em>P</em> &lt; 0.0001) and their interaction (<em>P</em> &lt; 0.0001) (adjusted <em>R</em><sup>2</sup> = 0.921). Thus, our method enables the automatic monitoring of changes in radiological reports and could help capturing disease emergence.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"45 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139463192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
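The sketch below shows one plausible, much-simplified reading of the monitoring scheme: vectorise two consecutive batches of reports with TF-IDF on one-grams, cluster each batch, label the current batch with both clusterings, and quantify their agreement with the adjusted Rand index. KMeans stands in for partition-around-medoids, the shared vocabulary is a simplification, and the toy report strings are invented.

```python
# Minimal sketch (toy reports): TF-IDF one-gram vectors, a clustering per
# two-week period, and adjusted-Rand agreement between consecutive periods.
# KMeans replaces partition-around-medoids for simplicity.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import adjusted_rand_score

period_prev = ["clear lungs no consolidation", "no acute abnormality",
               "rib fracture no pneumothorax", "clear lungs normal heart size"]
period_curr = ["bilateral ground glass opacities", "peripheral ground glass opacities",
               "clear lungs no consolidation", "no acute abnormality"]

vectorizer = TfidfVectorizer(ngram_range=(1, 1)).fit(period_prev + period_curr)
X_prev, X_curr = vectorizer.transform(period_prev), vectorizer.transform(period_curr)

model_prev = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_prev)
model_curr = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_curr)

labels_from_prev = model_prev.predict(X_curr)   # assign current reports to last period's clusters
labels_from_curr = model_curr.labels_
print("agreement (ARI):", round(adjusted_rand_score(labels_from_prev, labels_from_curr), 3))
```

A sustained drop in this agreement over successive periods is the kind of signal the study correlates with epidemiological indicators such as SARS-CoV-2 test positivity.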