Huadi Zhou, Mei Xie, Hemiao Shi, Chenhan Shou, Meng Tang, Yue Zhang, Yue Hu, Xiao Liu
{"title":"Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.","authors":"Huadi Zhou, Mei Xie, Hemiao Shi, Chenhan Shou, Meng Tang, Yue Zhang, Yue Hu, Xiao Liu","doi":"10.1371/journal.pone.0323752","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge.</p><p><strong>Methods: </strong>This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models-Random Forest, XGBoost, and Extra Trees-were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2).</p><p><strong>Results: </strong>The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly lower, they still demonstrated strong classification abilities, with AUCs ranging from 0.63 to 0.72. 
SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, significantly contributed to the model's classification decisions.</p><p><strong>Conclusion: </strong>The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.</p>","PeriodicalId":20189,"journal":{"name":"PLoS ONE","volume":"20 5","pages":"e0323752"},"PeriodicalIF":2.9000,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12080843/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS ONE","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1371/journal.pone.0323752","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
引用次数: 0
Abstract
Background: Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge.
Methods: This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features were extracted from both the tumor and peritumoral regions, and a random forest model was used to select the features that contributed most to classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2).
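The select-then-classify pipeline described above can be sketched with scikit-learn on synthetic data. This is a minimal illustration, not the authors' actual code: the dataset, number of retained features, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for radiomic features (tumor + peritumoral, ADC + T2);
# 199 samples mirrors the study's cohort size, 100 features is arbitrary.
X, y = make_classification(n_samples=199, n_features=100,
                           n_informative=10, random_state=0)

# Step 1: rank features with a random forest and keep the top 20 (assumed cutoff)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]
X_sel = X[:, top]

# Step 2: train a classifier on the selected features and evaluate by AUC
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y, test_size=0.3, random_state=0, stratify=y)
et = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, et.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```

The same split and scoring would be repeated per model (Random Forest, XGBoost, Extra Trees) and per feature combination to reproduce the comparison the paper reports.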
Results: The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, reaching an AUC of 0.729 in the tumor + peritumoral ADC+T2 group; the AUCs for the other combinations also exceeded 0.65. The Random Forest and XGBoost models performed slightly worse but still classified well, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, contributed substantially to the model's classification decisions.
Conclusion: The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
Journal overview:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online; authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage