Journal of Imaging Informatics in Medicine: Latest Articles

Deep Learning-Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-14. DOI: 10.1007/s10278-024-01256-x
C F Del Cerro, R C Giménez, J García-Blas, K Sosenko, J M Ortega, M Desco, M Abella
Radiation dose and image quality in radiology are influenced by the X-ray prime factors: kVp, mAs, and source-detector distance. The X-ray technician sets these parameters before acquisition according to the radiographic position, and an incorrect setting may cause exposure errors, forcing the examination to be repeated and increasing the radiation dose delivered to the patient. This work presents a deep learning approach that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database of 66 radiographic positions commonly used in clinical settings, prospectively acquired during 2022 from 75 volunteers in two X-ray facilities. The classification architecture was a lightweight version of ConvNeXt trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. The resulting model achieved 93.17% accuracy for radiographic position classification, rising to 95.58% when considering correct selection of prime factors, since half of the errors involved positions with the same kVp and mAs values. Most errors occurred for radiographic positions with similar patient poses in the photograph. The results suggest the method is feasible for streamlining the acquisition workflow, reducing exposure errors while preventing unnecessary radiation dose to patients.
Pages: 1661-1668. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092909/pdf/
Citations: 0
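The abstract mentions training with a one-cycle policy scheduler. The sketch below shows the general shape of such a schedule in plain Python; it is a generic illustration, and parameter values such as `max_lr`, `div_factor`, and `pct_start` are assumptions, not the authors' settings.

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, div_factor=25.0,
                 final_div_factor=1e4, pct_start=0.3):
    """Cosine-annealed one-cycle learning-rate schedule.

    Warms up from max_lr/div_factor to max_lr over the first pct_start
    fraction of training, then anneals down to max_lr/final_div_factor.
    """
    warm_steps = int(total_steps * pct_start)
    if step < warm_steps:
        # Warm-up phase: low -> max, following a half-cosine ramp.
        start = max_lr / div_factor
        t = step / max(1, warm_steps)
        return start + (max_lr - start) * (1 - math.cos(math.pi * t)) / 2
    # Annealing phase: max -> final, also a half-cosine.
    end = max_lr / final_div_factor
    t = (step - warm_steps) / max(1, total_steps - warm_steps)
    return max_lr + (end - max_lr) * (1 - math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 100) for s in range(100)]
peak_step = schedule.index(max(schedule))  # peak sits at the end of warm-up
```

With `pct_start=0.3` and 100 steps, the rate peaks at step 30 and then decays to a value far below the starting rate, which is the characteristic one-cycle shape.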
Integrating VAI-Assisted Quantified CXRs and Multimodal Data to Assess the Risk of Mortality.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-24. DOI: 10.1007/s10278-024-01247-y
Yu-Cheng Chen, Wen-Hui Fang, Chin-Sheng Lin, Dung-Jang Tsai, Chih-Wei Hsiang, Cheng-Kuang Chang, Kai-Hsiung Ko, Guo-Shu Huang, Yung-Tsai Lee, Chin Lin
To address the unmet need for a widely available examination for mortality prediction, this study developed a foundation visual artificial intelligence (VAI) model to enhance mortality risk stratification using chest X-rays (CXRs). The VAI employed deep learning to extract CXR features and a Cox proportional hazards model to generate a hazard score ("CXR-risk"). We retrospectively collected CXRs from patients who visited the outpatient department and physical examination center, then reviewed mortality and morbidity outcomes from electronic medical records. The dataset comprised 41,945, 10,492, 31,707, and 4,441 patients in the training, validation, internal test, and external test sets, respectively. During a median follow-up of 3.2 (IQR, 1.2-6.1) years in the internal and external test sets, "CXR-risk" achieved C-indexes of 0.859 (95% confidence interval (CI), 0.851-0.867) and 0.870 (95% CI, 0.844-0.896), respectively. Patients with a high "CXR-risk" (above the 85th percentile) had a significantly higher risk of mortality than those with a low risk (below the 50th percentile). Adding clinical and laboratory data and the radiographic report further improved predictive accuracy, yielding C-indexes of 0.888 and 0.900. The VAI can accurately predict mortality and morbidity outcomes from a single CXR, and it can complement other risk prediction indicators to help physicians assess patient risk more effectively.
Pages: 1581-1593. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092331/pdf/
Citations: 0
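The study stratifies patients by percentile cut-points of the hazard score (above the 85th percentile vs. below the 50th). A minimal sketch of that stratification step in plain Python; the percentile routine and the "intermediate" label for scores between the two cut-points are illustrative assumptions, not the paper's exact procedure.

```python
def percentile(scores, q):
    """Linear-interpolation percentile (q in [0, 100]) of a list of scores."""
    xs = sorted(scores)
    k = (len(xs) - 1) * q / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def stratify(scores, high_q=85, low_q=50):
    """Label each hazard score by cohort percentile cut-points."""
    hi_cut = percentile(scores, high_q)
    lo_cut = percentile(scores, low_q)
    labels = []
    for s in scores:
        if s > hi_cut:
            labels.append("high")        # above the 85th percentile
        elif s < lo_cut:
            labels.append("low")         # below the 50th percentile
        else:
            labels.append("intermediate")
    return labels
```

On a cohort of 100 evenly spread scores this yields roughly 15 "high" and 50 "low" labels, matching the percentile definitions.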
Knee Osteoarthritis SCAENet: Adaptive Knee Osteoarthritis Severity Assessment Using Spatial Separable Convolution with Attention-Based Ensemble Networks with Hybrid Optimization Strategy.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-22. DOI: 10.1007/s10278-024-01306-4
Sriramulu Devarapaga, Rajesh Thumma
Osteoarthritis (OA) of the knee is a chronic condition that significantly lowers patients' quality of life. Early detection and lifelong monitoring of OA progression are necessary for preventive therapy. In clinical practice, the Kellgren and Lawrence (KL) grading system categorizes OA severity. Deep learning techniques have recently been used to increase the precision and efficiency of OA severity assessment, but training can be compromised by low-confidence samples, which are less accurate than normal ones. This work proposes a deep learning-based knee osteoarthritis severity assessment model to accurately identify the condition in patients. Its phases are data collection, feature extraction, and prediction. Images are first gathered from online resources and passed to the feature extraction phase. A new model, Spatial Separable Convolution with Attention-based Ensemble Networks (SCAENet), predicts knee osteoarthritis; it includes feature extraction, stacked target-based feature pool generation, and prediction. Feature extraction uses ResNet, Visual Geometry Group (VGG16), and DenseNet, and the stacked target-based feature pool is obtained via the Hybridization of Equilibrium Slime Mould with Bald Eagle Search Optimization (HESM-BESO). Severity prediction is then performed with a one-dimensional convolutional neural network (1DCNN). The SCAENet model is validated against conventional methods and shows high performance.
Pages: 1563-1580. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092852/pdf/
Citations: 0
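The pipeline fuses features from three backbones (ResNet, VGG16, DenseNet) into a stacked feature pool. A minimal sketch of the concatenation step only, in plain Python; the HESM-BESO selection stage is not reproduced here, and the function name and list-of-lists representation are illustrative assumptions.

```python
def stack_feature_pool(feature_sets):
    """Concatenate per-backbone feature vectors for each sample.

    feature_sets: list of per-backbone lists, each holding one feature
    vector (list of floats) per sample, e.g.
    [resnet_feats, vgg16_feats, densenet_feats].
    Returns one fused vector per sample.
    """
    n = len(feature_sets[0])
    assert all(len(fs) == n for fs in feature_sets), "sample counts must match"
    fused = []
    for i in range(n):
        vec = []
        for fs in feature_sets:
            vec.extend(fs[i])  # append this backbone's features for sample i
        fused.append(vec)
    return fused
```

The fused vectors would then feed the selection and 1DCNN prediction stages described in the abstract.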
Automatic Segmentation of Ultrasound-Guided Quadratus Lumborum Blocks Based on Artificial Intelligence.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-09-25. DOI: 10.1007/s10278-024-01267-8
Qiang Wang, Bingxi He, Jie Yu, Bowen Zhang, Jingchao Yang, Jin Liu, Xinwei Ma, Shijing Wei, Shuai Li, Hui Zheng, Zhenchao Tang
Ultrasound-guided quadratus lumborum block (QLB) has become a widely used perioperative analgesia technique during abdominal and pelvic surgeries. Because of the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience, so using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. We retrospectively collected data from 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments the QLM; the external oblique, internal oblique, and transversus abdominis muscles (collectively, the EIT); and bone. The model demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy of 0.734, 0.829, 0.841, and 0.944, respectively. For the QLM specifically, IoU, recall, precision, and Dice coefficient were 0.711, 0.813, 0.850, and 0.831. Additionally, 85% of the pixels Q-VUM predicted as blocked fell within the actual blocked area, and the model outperformed common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720). In summary, Q-VUM accurately identifies the anatomical structure of the quadratus lumborum in real time, helping anesthesiologists precisely locate the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.
Pages: 1362-1373. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092921/pdf/
Citations: 0
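The segmentation results are reported as IoU and Dice. For reference, both metrics can be computed from binary masks as follows; this is a generic definition sketch, not the authors' evaluation code.

```python
def iou_and_dice(pred, target):
    """IoU and Dice coefficient for two binary masks (flat 0/1 lists).

    IoU  = |P ∩ T| / |P ∪ T|
    Dice = 2|P ∩ T| / (|P| + |T|)
    """
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0          # empty masks agree
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the paper reporting Dice 0.841 alongside mIoU 0.734.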
Leveraging Ensemble Models and Follow-up Data for Accurate Prediction of mRS Scores from Radiomic Features of DSC-PWI Images.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-04. DOI: 10.1007/s10278-024-01280-x
Mazen M Yassin, Asim Zaman, Jiaxi Lu, Huihui Yang, Anbo Cao, Haseeb Hassan, Taiyu Han, Xiaoqiang Miao, Yongkang Shi, Yingwei Guo, Yu Luo, Yan Kang
Predicting long-term clinical outcomes from an early DSC-PWI MRI scan is valuable for prognostication, resource management, clinical trials, and patient expectations. Current methods require subjective decisions about which imaging features to assess and may require time-consuming postprocessing. This study aimed to predict the multilabel 90-day modified Rankin Scale (mRS) score in acute ischemic stroke (AIS) patients by combining ensemble models with different configurations of radiomic features generated from dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI). In the follow-up study, 70 AIS patients underwent MRI within 24 hours post-stroke and had a follow-up scan; the single-scan study comprised 150 DSC-PWI scans from AIS patients. DSC-PWI radiomic features (DRF) were extracted from the scans, the Lasso algorithm was applied for feature selection, and new features were generated from the initial and follow-up scans. Ensemble models then classified patients into three classes: normal outcome (mRS 0-1), moderate outcome (mRS 2-4), and severe outcome (mRS 5-6). ANOVA and post hoc Tukey HSD tests confirmed significant differences in model performance across studies and classification techniques. Stacking models on average consistently outperformed the others, achieving an accuracy of 0.68 ± 0.15, precision of 0.68 ± 0.17, recall of 0.65 ± 0.14, and F1 score of 0.63 ± 0.15 in the follow-up study. Techniques such as Bo_Smote showed significantly higher recall and F1 scores, highlighting their robustness and effectiveness in handling imbalanced data. Ensemble models, particularly Bagging and Stacking, achieved nearly 0.93 accuracy, 0.95 precision, 0.94 recall, and 0.94 F1 in follow-up conditions, significantly outperforming single models. Ensemble models based on radiomics combining initial and follow-up scans can thus predict multilabel 90-day stroke outcomes with reduced subjectivity and user burden.
Pages: 1467-1483. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092328/pdf/
Citations: 0
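The three outcome classes are defined directly from mRS score ranges in the abstract. A one-function sketch of that mapping (the class names are the abstract's; the function name is an assumption):

```python
def mrs_outcome_class(mrs):
    """Map a 90-day modified Rankin Scale score (0-6) to the study's
    three outcome classes."""
    if not 0 <= mrs <= 6:
        raise ValueError("mRS must be between 0 and 6")
    if mrs <= 1:
        return "normal"      # mRS 0-1
    if mrs <= 4:
        return "moderate"    # mRS 2-4
    return "severe"          # mRS 5-6
```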
Transformer-Integrated Hybrid Convolutional Neural Network for Dose Prediction in Nasopharyngeal Carcinoma Radiotherapy.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-18. DOI: 10.1007/s10278-024-01296-3
Xiangchen Li, Yanhua Liu, Feixiang Zhao, Feng Yang, Wang Luo
Radiotherapy is recognized as the major treatment for nasopharyngeal carcinoma. Rapid and accurate dose prediction can improve the efficiency of the treatment planning process and the quality of radiotherapy plans. Deep learning-based methods are widely applied to dose prediction, but existing models based on convolutional neural networks (CNNs) often overlook long-distance information, while Transformer-based attempts lack the CNN's ability to process the spatial information inherent in images. We therefore propose a novel hybrid CNN-Transformer dose prediction model. To enhance feature transmission between the CNN and Transformer components, we design a hierarchical dense recurrent encoder with a channel attention mechanism, and we propose a progressive decoder that preserves richer texture information through layer-wise reconstruction of high-dimensional feature maps. The model also introduces object-driven skip connections that facilitate information flow between the encoder and decoder. Experiments on in-house datasets show the proposed model is superior to baseline methods on most dosimetric criteria, and image analysis metrics including PSNR, SSIM, and NRMSE demonstrate consistency with ground truth and promising visual results compared with other advanced methods. The method could serve as a powerful clinical guidance tool for physicists, significantly enhancing the efficiency of radiotherapy planning. The source code is available at https://github.com/CDUTJ102/THCN-Net .
Pages: 1531-1551. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092860/pdf/
Citations: 0
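The evaluation uses PSNR and NRMSE against the ground-truth dose map. A minimal sketch of both metrics in plain Python; normalizing by the reference's value range is one common convention, and the flat-list representation is an illustrative assumption, not the authors' implementation.

```python
import math

def psnr_nrmse(pred, ref, data_range=None):
    """PSNR (dB) and NRMSE between a predicted and reference dose map,
    given as flat lists of floats."""
    n = len(ref)
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / n
    # Normalize by the reference's dynamic range unless one is supplied.
    rng = data_range if data_range is not None else max(ref) - min(ref)
    psnr = float("inf") if mse == 0 else 10 * math.log10(rng ** 2 / mse)
    nrmse = math.sqrt(mse) / rng
    return psnr, nrmse
```

Higher PSNR and lower NRMSE indicate a predicted dose distribution closer to the planned ground truth.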
Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-25. DOI: 10.1007/s10278-024-01237-0
Meredith A Jones, Ke Zhang, Rowzat Faiz, Warid Islam, Javier Jo, Bin Zheng, Yuchen Qiu
This study investigates the impact of morphological information on classifying suspicious breast lesions. Deep transfer learning can significantly improve mammogram-based CADx schemes, but digital mammograms are grayscale while deep learning models are typically optimized on natural images with three channels, so grayscale mammograms must be converted to three-channel images for input to deep transfer models. This study develops a novel pseudo color image generation method that uses mass contour information to enhance classification performance. A total of 830 breast cancer cases were retrospectively collected (310 benign and 520 malignant). For each case, four regions of interest (ROIs) were extracted from the grayscale images of the CC and MLO views of both breasts. Seven pseudo color image sets were generated as model input, created from combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass. Output features from four identical pre-trained deep learning models were concatenated and processed by a support vector machine classifier to produce the final benign/malignant labels, and the performance of each image set was evaluated and compared. The pseudo color sets containing the manually segmented mass performed significantly better than all other sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020; the improvement also depended on the accuracy of the mass segmentation. These results support the hypothesis that accurately segmented mass contours provide complementary information, thereby enhancing the deep transfer model's performance in classifying suspicious breast lesions.
Pages: 1871-1880. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092865/pdf/
Citations: 0
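The core idea is fusing several processed versions of one grayscale ROI into the three channels of a pseudo color image. A minimal sketch of that channel-stacking step; which processed image goes in which channel, and the nested-list image representation, are illustrative assumptions rather than the paper's exact configuration.

```python
def to_pseudo_color(gray, hist_eq, filtered):
    """Fuse three single-channel versions of a mammogram ROI into one
    three-channel image.

    Each input is a list of rows of pixel values; the output assigns the
    three versions to the R, G, and B channels of each pixel.
    """
    h, w = len(gray), len(gray[0])
    return [[[gray[i][j], hist_eq[i][j], filtered[i][j]]
             for j in range(w)]
            for i in range(h)]
```

A three-channel tensor built this way can be fed to an ImageNet-pretrained backbone without modifying its input layer.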
A Large Language Model to Detect Negated Expressions in Radiology Reports.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-09-25. DOI: 10.1007/s10278-024-01274-9
Yvonne Su, Yonatan B Babore, Charles E Kahn
Natural language processing (NLP) is crucial for accurately extracting information from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared a rule-based NLP system and a medical-domain transformer-based model for detecting negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S. academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system on negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. A power analysis determined a sample size of 382 terms for α = 0.05 and β = 0.8 with McNemar's test; based on an estimated 15% rate of negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. Of these, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492; CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall did not differ significantly, CAN-BERT had significantly better precision (χ² = 304.64; p < 0.001). The transformer-based CAN-BERT model thus detected negated terms with high precision and recall, significantly exceeding the rule-based medspaCy system in precision. Use of such a system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
Pages: 1297-1303. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092861/pdf/
Citations: 0
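The comparison rests on McNemar's test over paired predictions. A minimal sketch of the continuity-corrected statistic; the counts in the usage note are illustrative, not the study's actual discordant pairs.

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square statistic (with continuity correction) for
    comparing two paired classifiers.

    b: number of items only classifier A got right
    c: number of items only classifier B got right
    Concordant pairs do not enter the statistic.
    """
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of a difference
    return (abs(b - c) - 1) ** 2 / (b + c)
```

For example, hypothetical discordant counts of 10 and 30 give a statistic of 9.025, well above the 3.84 critical value for p < 0.05 with one degree of freedom.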
Ocular Imaging Challenges, Current State, and a Path to Interoperability: A HIMSS-SIIM Enterprise Imaging Community Whitepaper.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-01. DOI: 10.1007/s10278-024-01261-0
Kerry E Goetz, Michael V Boland, Zhongdi Chu, Amberlynn A Reed, Shawn D Clark, Alexander J Towbin, Boonkit Purt, Kevin O'Donnell, Marilyn M Bui, Monief Eid, Christopher J Roth, Damien M Luviano, Les R Folio
Office-based testing, enhanced by advances in imaging technology, is routinely used in eye care to non-invasively assess ocular structure and function. Such imaging, coupled with autonomous artificial intelligence, holds immense opportunity to diagnose eye diseases quickly. Despite the wide availability and use of ocular imaging, several factors hinder optimization of clinical practice and patient care. While some large institutions have developed end-to-end digital workflows that use electronic health records, enterprise imaging archives, and dedicated diagnostic viewers, this experience has not yet reached smaller and independent eye clinics. Fractured interoperability practices affect patient care in all healthcare domains, including eye care, where the scarcity of care centers makes collaboration essential among providers, specialists, and primary care physicians who may be treating systemic conditions with profound impact on vision. The purpose of this white paper is to describe the current state of ocular imaging, focusing on the challenges related to interoperability, reporting, and clinical workflow.
Pages: 1283-1290. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092316/pdf/
Citations: 0
Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification.
Journal of imaging informatics in medicine. Pub Date: 2025-06-01. Epub Date: 2024-10-07. DOI: 10.1007/s10278-024-01286-5
Amir M Vahdani, Shahriar Faghani
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness. One area of active research is the method of non-conformity score calculation. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation, via weighted averaging based on the inverse of the mean calibration error of each stage. We benchmarked the method on two publicly available medical image classification datasets: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared with baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). Although baseline conformal prediction already exhibits small coverage errors, our method shows a significant improvement in coverage error, particularly noticeable with smaller datasets or smaller acceptable error levels, which are crucial in developing UQ frameworks for healthcare AI applications.
Pages: 1860-1870. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092326/pdf/
Citations: 0
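The coverage error being measured comes from standard split conformal prediction: calibrate a non-conformity threshold on held-out scores, then check how often test scores fall under it. A minimal sketch of that loop in plain Python; the DCS weighting of intermediate deep-supervision outputs is not reproduced here, and the function name is an assumption.

```python
import math

def split_conformal(cal_scores, test_scores, alpha=0.1):
    """Split conformal prediction: compute the conformal quantile of
    non-conformity scores on a calibration set, then measure the
    empirical coverage error on a test set."""
    xs = sorted(cal_scores)
    n = len(xs)
    # Finite-sample-corrected quantile index: ceil((n+1)(1-alpha)) - 1.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    qhat = xs[k]
    covered = sum(s <= qhat for s in test_scores) / len(test_scores)
    coverage_error = abs(covered - (1 - alpha))
    return qhat, coverage_error
```

With a well-calibrated model the empirical coverage should sit near 1 - alpha, so `coverage_error` plays the role of the paper's mean coverage error metric.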