A Robust Semi-Supervised Brain Tumor MRI Classification Network for Data-Constrained Clinical Environments.

Impact Factor 3.3 · CAS Tier 3 (Medicine) · JCR Q1 (Medicine, General & Internal)
Subhash Chand Gupta, Vandana Bhattacharjee, Shripal Vijayvargiya, Partha Sarathi Bishnu, Raushan Oraon, Rajendra Majhi
{"title":"A Robust Semi-Supervised Brain Tumor MRI Classification Network for Data-Constrained Clinical Environments.","authors":"Subhash Chand Gupta, Vandana Bhattacharjee, Shripal Vijayvargiya, Partha Sarathi Bishnu, Raushan Oraon, Rajendra Majhi","doi":"10.3390/diagnostics15192485","DOIUrl":null,"url":null,"abstract":"<p><p><b>Background:</b> The accurate classification of brain tumor subtypes from MRI scans is critical for timely diagnosis, yet the manual annotation of large datasets remains prohibitively labor-intensive. <b>Method:</b> We present SSPLNet (Semi-Supervised Pseudo-Labeling Network), a dual-branch deep learning framework that synergizes confidence-guided iterative pseudo-labelling with deep feature fusion to enable robust MRI-based tumor classification in data-constrained clinical environments. SSPLNet integrates a custom convolutional neural network (CNN) and a pretrained ResNet50 model, trained semi-supervised using adaptive confidence thresholds (τ = 0.98 → 0.95 → 0.90) to iteratively refine pseudo-labels for unlabelled MRI scans. Feature representations from both branches are fused via a dense network, combining localized texture patterns with hierarchical deep features. <b>Results:</b> SSPLNet achieves state-of-the-art accuracy across labelled-unlabelled data splits (90:10 to 10:90), outperforming supervised baselines in extreme low-label regimes (10:90) by up to 5.34% from Custom CNN and 5.58% from ResNet50. The framework reduces annotation dependence and with 40% unlabeled data maintains 98.17% diagnostic accuracy, demonstrating its viability for scalable deployment in resource-limited healthcare settings. <b>Conclusions:</b> Statistical Evaluation and Robustness Analysis of SSPLNet Performance confirms that SSPLNet's lower error rate is not due to chance. The bootstrap results also confirm that SSPLNet's reported accuracy falls well within the 95% CI of the sampling distribution.</p>","PeriodicalId":11225,"journal":{"name":"Diagnostics","volume":"15 19","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12524090/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diagnostics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/diagnostics15192485","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
引用次数: 0

Abstract

Background: The accurate classification of brain tumor subtypes from MRI scans is critical for timely diagnosis, yet the manual annotation of large datasets remains prohibitively labor-intensive. Method: We present SSPLNet (Semi-Supervised Pseudo-Labeling Network), a dual-branch deep learning framework that combines confidence-guided iterative pseudo-labelling with deep feature fusion to enable robust MRI-based tumor classification in data-constrained clinical environments. SSPLNet integrates a custom convolutional neural network (CNN) and a pretrained ResNet50 model, trained in a semi-supervised manner with adaptive confidence thresholds (τ = 0.98 → 0.95 → 0.90) that iteratively refine pseudo-labels for unlabelled MRI scans. Feature representations from both branches are fused via a dense network, combining localized texture patterns with hierarchical deep features. Results: SSPLNet achieves state-of-the-art accuracy across labelled-unlabelled data splits (from 90:10 to 10:90), outperforming the supervised baselines in the extreme low-label regime (10:90) by up to 5.34% over the custom CNN and 5.58% over ResNet50. The framework reduces annotation dependence, maintaining 98.17% diagnostic accuracy with 40% unlabelled data, which demonstrates its viability for scalable deployment in resource-limited healthcare settings. Conclusions: Statistical evaluation and robustness analysis confirm that SSPLNet's lower error rate is not due to chance, and bootstrap resampling shows that its reported accuracy falls well within the 95% confidence interval of the sampling distribution.
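The abstract describes two mechanisms: a dual-branch model that fuses features from a custom CNN and a pretrained ResNet50, and a confidence-guided pseudo-labelling loop whose threshold is relaxed over rounds (τ = 0.98 → 0.95 → 0.90). The following is a minimal PyTorch sketch of how such a pipeline could look; it is not the authors' code, and the layer sizes, the `train_one_round` callback, the data-loading conventions, and all hyperparameters other than the published τ schedule are illustrative assumptions.

```python
# Sketch only (not the authors' implementation): dual-branch feature fusion
# plus confidence-guided pseudo-labelling with an adaptive threshold schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights


class DualBranchClassifier(nn.Module):
    """Custom CNN branch + pretrained ResNet50 branch, fused by dense layers."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Branch 1: small custom CNN for localized texture patterns
        # (assumes MRI slices replicated to 3 channels).
        self.custom_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 64 features
        )
        # Branch 2: pretrained ResNet50 with its classification head removed.
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()                         # -> 2048 features
        self.resnet = backbone
        # Dense fusion head over the concatenated feature vectors.
        self.fusion = nn.Sequential(
            nn.Linear(64 + 2048, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        feats = torch.cat([self.custom_cnn(x), self.resnet(x)], dim=1)
        return self.fusion(feats)


@torch.no_grad()
def pseudo_label(model, unlabeled_loader, tau, device="cpu"):
    """Keep only unlabelled images whose max softmax probability exceeds tau."""
    model.eval()
    images, labels = [], []
    for x in unlabeled_loader:              # loader yields image batches only
        probs = F.softmax(model(x.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= tau
        images.append(x[keep.cpu()])
        labels.append(pred[keep].cpu())
    return torch.cat(images), torch.cat(labels)


def iterative_pseudo_labeling(model, labeled_ds, unlabeled_loader,
                              train_one_round, taus=(0.98, 0.95, 0.90)):
    """Adaptive-threshold schedule: re-train after each pseudo-labelling pass."""
    train_one_round(model, labeled_ds)               # supervised warm-up
    for tau in taus:                                 # progressively relax tau
        x_pl, y_pl = pseudo_label(model, unlabeled_loader, tau)
        augmented = torch.utils.data.ConcatDataset(
            [labeled_ds, torch.utils.data.TensorDataset(x_pl, y_pl)])
        train_one_round(model, augmented)            # retrain on the union
    return model
```

Starting strict (τ = 0.98) admits only pseudo-labels the warm-up model is nearly certain about, and relaxing the threshold in later rounds lets the progressively better model label more of the unlabelled pool, which is the intuition behind the adaptive schedule described in the Method.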
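The Conclusions also refer to a bootstrap check that the reported accuracy lies within the 95% CI of the sampling distribution. Below is a minimal sketch of a percentile-bootstrap confidence interval over test-set predictions; the function name, resample count, and seed are assumptions rather than the authors' protocol.

```python
# Percentile-bootstrap CI for classification accuracy (illustrative sketch).
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_resamples=10_000, alpha=0.05, seed=0):
    """Return (point accuracy, (ci_low, ci_high)) via percentile bootstrap."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_resamples)
    for i in range(n_resamples):
        idx = rng.integers(0, n, size=n)          # resample test cases with replacement
        accs[i] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(y_true == y_pred)), (float(lo), float(hi))

# Usage (hypothetical arrays): acc, (lo, hi) = bootstrap_accuracy_ci(y_test, preds)
# The abstract's claim corresponds to acc falling inside (lo, hi).
```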

Source journal: Diagnostics (Biochemistry, Genetics and Molecular Biology - Clinical Biochemistry)
CiteScore: 4.70
Self-citation rate: 8.30%
Publication volume: 2699
Review time: 19.64 days
Journal description: Diagnostics (ISSN 2075-4418) is an international scholarly open access journal on medical diagnostics. It publishes original research articles, reviews, communications and short notes on the research and development of medical diagnostics. There is no restriction on the length of the papers. Our aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible. Full experimental and/or methodological details must be provided for research articles.