Multimodal deep learning-based radiomics for meningioma consistency prediction: integrating T1 and T2 MRI in a multi-center study.
Huanjie Lin, Yubiao Yue, Li Xie, Bingbing Chen, Weifeng Li, Fan Yang, Qinrong Zhang, Huai Chen
BMC Medical Imaging 25(1): 216 (2025-07-01). DOI: 10.1186/s12880-025-01787-x

Background: Meningioma consistency critically impacts surgical planning, as soft tumors are easier to resect than hard tumors. Current MRI-based assessments of tumor consistency are subjective and lack quantitative accuracy. Integrating deep learning and radiomics could enhance the predictive accuracy of meningioma consistency.
Methods: A retrospective study analyzed 204 meningioma patients from two centers: the Second Affiliated Hospital of Guangzhou Medical University and the Southern Theater Command Hospital PLA. Three models were developed: a radiomics model (Rad_Model), a deep learning model (DL_Model), and a combined model (DLR_Model). Model performance was evaluated using AUC, accuracy, sensitivity, specificity, and precision.
Results: The DLR_Model outperformed the other models across all cohorts. In the training set, it achieved an AUC of 0.957, accuracy of 0.908, and precision of 0.965. In the external test cohort, it maintained superior performance with an AUC of 0.854, accuracy of 0.778, and precision of 0.893, surpassing both the Rad_Model (AUC = 0.768) and the DL_Model (AUC = 0.720). Combining radiomics and deep learning features improved predictive performance and robustness.
Conclusion: Our study introduced and evaluated a deep learning radiomics model (DLR_Model) to accurately predict the consistency of meningiomas, which has the potential to improve preoperative assessment and surgical planning.
{"title":"Cross-domain subcortical brain structure segmentation algorithm based on low-rank adaptation fine-tuning SAM.","authors":"Yuan Sui, Qian Hu, Yujie Zhang","doi":"10.1186/s12880-025-01779-x","DOIUrl":"https://doi.org/10.1186/s12880-025-01779-x","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate and robust segmentation of anatomical structures in brain MRI provides a crucial basis for the subsequent observation, analysis, and treatment planning of various brain diseases. Deep learning foundation models trained and designed on large-scale natural scene image datasets experience significant performance degradation when applied to subcortical brain structure segmentation in MRI, limiting their direct applicability in clinical diagnosis.</p><p><strong>Methods: </strong>This paper proposes a subcortical brain structure segmentation algorithm based on Low-Rank Adaptation (LoRA) to fine-tune SAM (Segment Anything Model) by freezing SAM's image encoder and applying LoRA to approximate low-rank matrix updates to the encoder's training weights, while also fine-tuning SAM's lightweight prompt encoder and mask decoder.</p><p><strong>Results: </strong>The fine-tuned model's learnable parameters (5.92 MB) occupy only 6.39% of the original model's parameter size (92.61 MB). For training, model preheating is employed to stabilize the fine-tuning process. During inference, adaptive prompt learning with point or box prompts is introduced to enhance the model's accuracy for arbitrary brain MRI segmentation.</p><p><strong>Conclusion: </strong>This interactive prompt learning approach provides clinicians with a means of intelligent segmentation for deep brain structures, effectively addressing the challenges of limited data labels and high manual annotation costs in medical image segmentation. We use five MRI datasets of IBSR, MALC, LONI, LPBA, Hammers and CANDI for experiments across various segmentation scenarios, including cross-domain settings with inference samples from diverse MRI datasets and supervised fine-tuning settings, demonstrate the proposed segmentation algorithm's generalization and effectiveness when compared to current mainstream and supervised segmentation algorithms.</p>","PeriodicalId":9020,"journal":{"name":"BMC Medical Imaging","volume":"25 1","pages":"248"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144538237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predict status of axillary lymph node after neoadjuvant chemotherapy with dual-energy CT in breast cancer.
Zhen Wang, Zhao-Qing Fan, Li-Ze Wang, Kun Cao, Rong Long, Yao Luo, Xiao-Ting Li, Liang You, Qing-Yang Li, Ying-Shi Sun
BMC Medical Imaging 25(1): 233 (2025-07-01). DOI: 10.1186/s12880-025-01799-7

Background: A proportion of breast cancer patients achieve axillary pathological complete response (pCR) following neoadjuvant chemotherapy (NAC). However, few studies have investigated the potential of quantitative parameters derived from dual-energy CT (DECT) for predicting axillary lymph node (ALN) downstaging after NAC.
Methods: This study included a prospective training cohort and a retrospective validation cohort from December 2019 to June 2022. Both cohorts enrolled patients with invasive breast cancer and biopsy-proven metastatic ALNs who underwent contrast-enhanced DECT and NAC followed by surgery. One metastatic ALN, designated the target lymph node (TLN), was marked with a metal clip at baseline. Quantitative DECT parameters and the size of the TLN, together with clinical information, were compared between the pCR and non-pCR node groups with reference to postoperative pathology. Three predictive models (clinical, quantitative CT, and combinational) were built by logistic regression, and a nomogram was drawn accordingly. Performance was evaluated with the receiver operating characteristic curve, and clinical usefulness was assessed by decision curve analysis.
Results: A total of 75 and 53 patients were included in the training and validation cohorts, respectively; of these, 34 (45.3%) and 22 (41.5%) patients achieved nodal pCR. Multivariable analyses revealed that negative estrogen receptor expression, parenchyma thickness, and the iodine concentration of the TLN at post-NAC CT were independent predictive factors for pCR. The combinational model showed better discriminatory power than the single clinical model (AUC, 0.724; p = 0.003) and the quantitative CT model (AUC, 0.728; p = 0.030), with AUCs of 0.847 and 0.828 in the training and validation cohorts, and it provided enhanced net benefit within a wide range of threshold probabilities.
Conclusion: Quantitative DECT parameters can be used to evaluate axillary nodal status after NAC and guide personalized treatment strategies.
Diagnostic value of preoperative advanced doppler imaging with cervical maneuvers in the detection of central cervical lymph node metastasis in papillary thyroid carcinoma.
Hosein Chegeni, Vahid Khani, Jalal Kargar, Abbas Alibakhshi, Khosro Shamsi, Hojat Ebrahiminik, Reza Gerami
BMC Medical Imaging 25(1): 214 (2025-07-01). DOI: 10.1186/s12880-025-01750-w

Objective: This study assesses the diagnostic value of preoperative greyscale and Doppler imaging, and their combined use with simultaneous advanced Doppler imaging and cervical maneuvers, in detecting central cervical lymph node metastasis in papillary thyroid carcinoma patients.
Methods: In this cross-sectional survey, we included candidates for total or partial thyroidectomy with concomitant cervical lymph node dissection who were referred to the TIRAD imaging center from February 2022 to September 2023 with a diagnosis of papillary thyroid carcinoma. Patients underwent preoperative ultrasonographic examination using the Aixplorer device (Supersonic Imagine, France) with a 7.5-16 MHz linear array transducer to identify potential metastasis within the cervical lymph nodes. Diagnostic performance is reported as sensitivity, specificity, positive and negative predictive values, and likelihood ratios.
Results: Postoperative pathology showed metastasis in 85 (42.5%) patients. The standard imaging protocol, without cervical maneuvers or advanced Doppler imaging, detected metastatic involvement in 34 (17.0%) subjects, whereas the modified approach using advanced Doppler imaging and cervical maneuvers identified metastatic involvement in 84 (42.0%) cases. Without advanced Doppler imaging and maneuvers, preoperative sensitivity was 35.3%, specificity 96.5%, positive predictive value 88.2%, and negative predictive value 66.9%. With advanced Doppler imaging and maneuvers, sensitivity was 97.6%, specificity 99.1%, positive predictive value 98.8%, and negative predictive value 98.3%.
Conclusion: Advanced Doppler imaging can improve visualization of the cervical areas, owing to its ultrafast and ultrasensitive perception qualities, facilitating early recognition of vascular pattern changes.
Contrast-enhanced mammography-based interpretable machine learning model for the prediction of the molecular subtype breast cancers.
Mengwei Ma, Weimin Xu, Jun Yang, Bowen Zheng, Chanjuan Wen, Sina Wang, Zeyuan Xu, Genggeng Qin, Weiguo Chen
BMC Medical Imaging 25(1): 255 (2025-07-01). DOI: 10.1186/s12880-025-01765-3

Objective: This study aims to establish a machine learning prediction model to explore the correlation between contrast-enhanced mammography (CEM) imaging features and molecular subtypes of mass-type breast cancer.
Materials and methods: This retrospective study included women with breast cancer who underwent CEM preoperatively between 2018 and 2021. We included 241 patients, randomly assigned to a training or a test set in a 7:3 ratio. Twenty-one features were visually described, comprising four clinical features and seventeen radiological features extracted from the CEM. Three binary subtype classifications were performed: Luminal vs. non-Luminal, HER2-enriched vs. non-HER2-enriched, and triple-negative (TNBC) vs. non-triple-negative. A multinomial naive Bayes (MNB) machine learning scheme was employed for classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive features for the classifiers. Classification performance was evaluated using the area under the receiver operating characteristic curve. We also used SHapley Additive exPlanations (SHAP) values to explain the prediction model.
Results: The model that used a combination of low-energy (LE) and dual-energy subtraction (DES) images achieved the best performance compared with either image type alone, yielding an area under the receiver operating characteristic curve of 0.798 for Luminal vs. non-Luminal subtypes, 0.695 for TNBC vs. non-TNBC, and 0.773 for HER2-enriched vs. non-HER2-enriched. The SHAP analysis shows that "LE_mass_margin_spiculated," "DES_mass_enhanced_margin_spiculated," and "DES_mass_internal_enhancement_homogeneous" have the most significant impact on the model's performance in predicting Luminal vs. non-Luminal breast cancer, while "mass_calcification_relationship_no," "calcification_type_no," and "LE_mass_margin_spiculated" have a considerable impact on the model's performance in predicting HER2-enriched vs. non-HER2-enriched breast cancer.
Conclusions: The radiological characteristics of breast tumors extracted from CEM were found to be associated with breast cancer subtypes in our study. Future research is needed to validate these findings.
Deep learning for gender estimation using hand radiographs: a comparative evaluation of CNN models.
Hilal Er Ulubaba, İpek Atik, Rukiye Çiftçi, Özgür Eken, Monira I Aldhahi
BMC Medical Imaging 25(1): 260 (2025-07-01). DOI: 10.1186/s12880-025-01809-8

Background: Accurate gender estimation plays a crucial role in forensic identification, especially in mass disasters or cases involving fragmented or decomposed remains where traditional skeletal landmarks are unavailable. This study aimed to develop a deep learning-based model for gender classification using hand radiographs, offering a rapid and objective alternative to conventional methods.
Methods: We analyzed 470 left-hand X-ray images from adults aged 18 to 65 years using four convolutional neural network (CNN) architectures: ResNet-18, ResNet-50, InceptionV3, and EfficientNet-B0. Following image preprocessing and data augmentation, models were trained and validated using standard classification metrics: accuracy, precision, recall, and F1 score. Data augmentation included random rotation, horizontal flipping, and brightness adjustments to enhance model generalization.
Results: Among the tested models, ResNet-50 achieved the highest classification accuracy (93.2%), with precision of 92.4%, recall of 93.3%, and F1 score of 92.5%. While the other models demonstrated acceptable performance, ResNet-50 consistently outperformed them across all metrics. These findings suggest CNNs can reliably extract sexually dimorphic features from hand radiographs.
Conclusions: Deep learning approaches, particularly ResNet-50, provide a robust, scalable, and efficient solution for gender prediction from hand X-ray images. This method may serve as a valuable tool in forensic scenarios where speed and reliability are critical. Future research should validate these findings across diverse populations and incorporate explainable AI techniques to enhance interpretability.
Eff-ReLU-Net: a deep learning framework for multiclass wound classification.
Sifat Ullah, Ali Javed, Muteb Aljasem, Abdul Khader Jilani Saudagar
BMC Medical Imaging 25(1): 257 (2025-07-01). DOI: 10.1186/s12880-025-01785-z

Chronic wounds have emerged as a significant medical challenge due to their adverse effects, including infections leading to amputations. Over the past few years, the prevalence of chronic wounds has grown, posing significant health hazards. Automating wound assessment is becoming necessary to limit healthcare practitioners' dependence on manual methods, so an effective wound classifier is needed that enables practitioners to classify wounds quickly and reliably. This work proposes Eff-ReLU-Net, an improved EfficientNet-B0-based deep learning model for accurately identifying multiple categories of wounds. More precisely, we adopt the ReLU activation function in place of Swish in Eff-ReLU-Net because of its simplicity, reliability, and efficiency. Additionally, we append three fully connected dense layers to capture more distinctive features, leading to improved multi-class wound classification. We also employ augmentation approaches such as fixed-angle rotations at 90°, 180°, and 270°, rotational invariance, random rotation, and translation to improve data diversity and sample counts, supporting better model generalization and combating overfitting. The proposed model's effectiveness is assessed on the publicly available AZH and Medetec wound datasets, and a cross-corpora evaluation shows the generalizability of our method. The proposed model achieved an accuracy, precision, recall, and F1-score of 92.33%, 97.66%, 95.33%, and 96.48% on Medetec, respectively, and 90%, 89.45%, 92.19%, and 90.84%, respectively, on the AZH dataset. These results validate the effectiveness of the proposed Eff-ReLU-Net for classifying chronic wounds.
The value of diffusion-weighted imaging and semi-quantitative dynamic contrast-enhanced MRI in predicting the efficacy of medroxyprogesterone acetate treatment for atypical endometrial hyperplasia and endometrial carcinoma.
Mingming Liu, Xingzheng Zheng, Na Mo, Yang Liu, Erhu Jin, Yuting Liang
BMC Medical Imaging 25(1): 210 (2025-07-01). DOI: 10.1186/s12880-025-01754-6

Background: Noninvasive evaluation of treatment efficacy is important for patients with atypical endometrial hyperplasia (AEH) and endometrial carcinoma (EC) who wish to have children. This study aimed to explore the feasibility of diffusion-weighted imaging (DWI) and semi-quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in predicting the efficacy of medroxyprogesterone acetate treatment for AEH and EC.
Methods: A retrospective analysis was conducted on the clinical-pathological data of 6 patients with AEH and 6 patients with EC. The treatment effects of medroxyprogesterone acetate were evaluated pathologically, and MRI examinations were performed at follow-up at the 3rd and 6th month after treatment. Repeated-measures analysis of variance was used to compare the apparent diffusion coefficient (ADC) values and maximum signal difference (MSD) of the lesion and the corresponding endometrial site before treatment and at the 3rd and 6th month after treatment. Endometrial thickness was analyzed using the Friedman test, and Fisher's exact test was used to determine whether there was a significant difference in the time-intensity curve (TIC) type.
Results: Endometrial thickness differed significantly between baseline and the 3rd and 6th month after treatment for both EC and AEH (P < 0.017). For EC, ADC values differed significantly between baseline and the 3rd or 6th month after treatment (P < 0.017), and the type of TIC curve differed significantly before and after treatment (P < 0.001). However, the difference in MSD values before and after treatment was not significant for EC or AEH (P > 0.05), and no significant differences were noted in ADC values or TIC curve type before and after treatment for AEH (P > 0.05).
Conclusions: Endometrial thickness can serve as an imaging marker for predicting complete remission of EC and AEH with medroxyprogesterone acetate treatment, while ADC values and TIC curve types can serve as imaging markers for predicting complete remission of EC.
2.5D deep learning radiomics and clinical data for predicting occult lymph node metastasis in lung adenocarcinoma.
Xiaoxin Huang, Xiaoxiao Huang, Kui Wang, Haosheng Bai, Xiuxian Lu, Guanqiao Jin
BMC Medical Imaging 25(1): 225 (2025-07-01). DOI: 10.1186/s12880-025-01759-1

Background: Occult lymph node metastasis (OLNM) refers to lymph node involvement that remains undetectable by conventional imaging techniques, posing a significant challenge to accurate staging of lung adenocarcinoma. This study investigates the potential of combining 2.5D deep learning radiomics with clinical data to predict OLNM in lung adenocarcinoma.
Methods: Retrospective contrast-enhanced CT images were collected from 1,099 patients diagnosed with lung adenocarcinoma across two centers. Multivariable analysis was performed to identify independent clinical risk factors for constructing clinical signatures. Radiomics features were extracted from the enhanced CT images to develop radiomics signatures. A 2.5D deep learning approach was used to extract deep learning features from the images, which were then aggregated using multi-instance learning (MIL) to construct MIL signatures. Deep learning radiomics (DLRad) signatures were developed by integrating the deep learning features with the radiomic features, and these were subsequently combined with the clinical features to form the combined signatures. Performance was evaluated using the area under the curve (AUC).
Results: The clinical model achieved AUCs of 0.903, 0.866, and 0.785 in the training, validation, and external test cohorts, respectively. The radiomics model yielded AUCs of 0.865, 0.892, and 0.796; the MIL model, 0.903, 0.900, and 0.852; and the DLRad model, 0.910, 0.908, and 0.875 in the same cohorts. Notably, the combined model consistently outperformed all other models, achieving AUCs of 0.940, 0.923, and 0.898 in the training, validation, and external test cohorts.
Conclusion: The integration of 2.5D deep learning radiomics with clinical data demonstrates strong capability for predicting OLNM in lung adenocarcinoma, potentially aiding clinicians in developing more personalized treatment strategies.
Consensus clustering based on CT radiomics has potential for risk stratification of patients with clinical T1 stage lung adenocarcinoma.
Hao Dong, Yang Li, Lingli Zhao, Lekang Yin, Xiaojun Guan, Xiaodan Ye, Xiaojun Xu
BMC Medical Imaging 25(1): 231 (2025-07-01). DOI: 10.1186/s12880-025-01795-x

Background: This study aimed to risk-classify patients with clinical T1 stage lung adenocarcinoma (LUAD) based on consensus clustering of CT radiomics, to help clinicians provide personalized treatment strategies for patients with early-stage LUAD.
Materials: Clinical, pathological, and CT imaging data of patients who underwent surgical resection with pathologically confirmed lung adenocarcinoma from September 2018 to May 2021 were retrospectively analysed. The clinical and pathological information included age, gender, smoking history, tumor location, pathological subtype, infiltration level, lymph node metastasis (LNM), visceral pleural invasion (VPI), lymphovascular invasion (LVI), spread through air spaces (STAS), Ki-67 proliferation index, and gene mutation information. Unsupervised consensus clustering analysis was performed on the radiomic features of the CT images to determine the optimal number of clusters and evaluate the clustering. Patients were grouped according to the consensus clustering results and compared with respect to histopathological characteristics and genomic information, and subgroup analyses were performed in invasive adenocarcinomas and sub-solid lesions.
Results: In total, 497 cases were optimally divided into 2 clusters, with 258 (51.9%) cases in cluster 1 and 239 (48.1%) cases in cluster 2. Clusters 1 and 2 differed significantly in micropapillary component, solid component, STAS, and Ki-67 proliferation index (p < 0.001), as well as in LNM and VPI (p = 0.031 and 0.012, respectively). Micropapillary component, solid component, STAS, and Ki-67 proliferation index also differed significantly in the subgroup analyses of invasive adenocarcinomas and sub-solid lesions (p < 0.05). Regarding gene mutations, clusters 1 and 2 differed significantly only in HER2 mutations (p < 0.001).
Conclusion: Consensus clustering based on CT radiomics can identify associations between radiomic features, pathological risk factors, and genomic features in clinical T1 stage lung adenocarcinoma, which can help with clinical risk stratification of stage T1 LUAD patients.
Clinical trial number: Not applicable.