TurkerNeXtV2: An Innovative CNN Model for Knee Osteoarthritis Pressure Image Classification.

Impact Factor 3.3 · CAS Region 3 (Medicine) · JCR Q1 (Medicine, General & Internal)
Omer Esmez, Gulnihal Deniz, Furkan Bilek, Murat Gurger, Prabal Datta Barua, Sengul Dogan, Mehmet Baygin, Turker Tuncer
{"title":"TurkerNeXtV2: An Innovative CNN Model for Knee Osteoarthritis Pressure Image Classification.","authors":"Omer Esmez, Gulnihal Deniz, Furkan Bilek, Murat Gurger, Prabal Datta Barua, Sengul Dogan, Mehmet Baygin, Turker Tuncer","doi":"10.3390/diagnostics15192478","DOIUrl":null,"url":null,"abstract":"<p><p><b>Background/Objectives:</b> Lightweight CNNs for medical imaging remain limited. We propose TurkerNeXtV2, a compact CNN that introduces two new blocks: a pooling-based attention with an inverted bottleneck (TNV2) and a hybrid downsampling module. These blocks improve stability and efficiency. The aim is to achieve transformer-level effectiveness while keeping the simplicity, low computational cost, and deployability of CNNs. <b>Methods:</b> The model was first pretrained on the Stable ImageNet-1k benchmark and then fine-tuned on a collected plantar-pressure OA dataset. We also evaluated the model on a public blood-cell image dataset. Performance was measured by accuracy, precision, recall, and F1-score. Inference time (images per second) was recorded on an RTX 5080 GPU. Grad-CAM was used for qualitative explainability. <b>Results:</b> During pretraining on Stable ImageNet-1k, the model reached a validation accuracy of 87.77%. On the OA test set, the model achieved 93.40% accuracy (95% CI: 91.3-95.2%) with balanced precision and recall above 90%. On the blood-cell dataset, the test accuracy was 98.52%. The average inference time was 0.0078 s per image (≈128.8 images/s), which is comparable to strong CNN baselines and faster than the transformer baselines tested under the same settings. <b>Conclusions:</b> TurkerNeXtV2 delivers high accuracy with low computational cost. The pooling-based attention (TNV2) and the hybrid downsampling enable a lightweight yet effective design. The model is suitable for real-time and clinical use. Future work will include multi-center validation and broader tests across imaging modalities.</p>","PeriodicalId":11225,"journal":{"name":"Diagnostics","volume":"15 19","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12523376/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diagnostics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/diagnostics15192478","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
引用次数: 0

Abstract

Background/Objectives: Lightweight CNNs for medical imaging remain limited. We propose TurkerNeXtV2, a compact CNN that introduces two new blocks: a pooling-based attention with an inverted bottleneck (TNV2) and a hybrid downsampling module. These blocks improve stability and efficiency. The aim is to achieve transformer-level effectiveness while keeping the simplicity, low computational cost, and deployability of CNNs.

Methods: The model was first pretrained on the Stable ImageNet-1k benchmark and then fine-tuned on a collected plantar-pressure osteoarthritis (OA) dataset. We also evaluated the model on a public blood-cell image dataset. Performance was measured by accuracy, precision, recall, and F1-score. Inference time per image and throughput (images per second) were recorded on an RTX 5080 GPU. Grad-CAM was used for qualitative explainability.

Results: During pretraining on Stable ImageNet-1k, the model reached a validation accuracy of 87.77%. On the OA test set, the model achieved 93.40% accuracy (95% CI: 91.3-95.2%) with precision and recall both above 90%. On the blood-cell dataset, the test accuracy was 98.52%. The average inference time was 0.0078 s per image (≈128.8 images/s), which is comparable to strong CNN baselines and faster than the transformer baselines tested under the same settings.

Conclusions: TurkerNeXtV2 delivers high accuracy at low computational cost. The pooling-based attention (TNV2) and the hybrid downsampling enable a lightweight yet effective design, and the model is suitable for real-time and clinical use. Future work will include multi-center validation and broader tests across imaging modalities.
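The abstract names the two new blocks (a pooling-based attention with an inverted bottleneck, TNV2, and a hybrid downsampling module) but gives no layer-level detail. The PyTorch sketch below shows one plausible way such blocks can be composed; the channel-gate form of the attention, the 4x expansion ratio, and the strided-conv plus max-pool fusion are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class TNV2Block(nn.Module):
    """Sketch of a pooling-based attention + inverted-bottleneck block.
    The avg-pool channel gate and the 4x expansion are assumptions."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        # Pooling-based attention: channel gate driven by global average pooling.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.norm = nn.BatchNorm2d(channels)
        # Inverted bottleneck: expand -> depthwise conv -> project back.
        self.inverted_bottleneck = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gated = x * self.attn(x)                       # attention-weighted features
        return x + self.inverted_bottleneck(self.norm(gated))  # residual keeps training stable

class HybridDownsample(nn.Module):
    """Sketch of a hybrid downsampling module: a strided-conv branch and a
    max-pool branch fused by a 1x1 projection (even spatial sizes assumed)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv_branch = nn.Conv2d(in_channels, out_channels,
                                     kernel_size=3, stride=2, padding=1)
        self.pool_branch = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(out_channels + in_channels, out_channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([self.conv_branch(x), self.pool_branch(x)], dim=1)
        return self.norm(self.fuse(y))

Stacking several TNV2Block units between HybridDownsample stages, followed by global average pooling and a linear classifier, yields the kind of compact CNN the abstract describes; for example, with block = TNV2Block(64) and down = HybridDownsample(64, 128), down(block(torch.randn(1, 64, 56, 56))) returns a tensor of shape (1, 128, 28, 28).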
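Grad-CAM is used only for qualitative explainability, so any standard implementation applies. A minimal, dependency-free version over a chosen convolutional layer might look like the following; the choice of target layer and the max-normalization are assumptions.

import torch

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the class score, then apply ReLU."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output.detach()

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.eval()
        logits = model(image)                         # image: (1, C, H, W)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)      # (1, K, 1, 1)
        cam = torch.relu((weights * activations["value"]).sum(dim=1))    # (1, h, w)
        return cam / (cam.max() + 1e-8)
    finally:
        h1.remove()
        h2.remove()

Upsampling the returned map to the input resolution and overlaying it on the plantar-pressure image produces the kind of qualitative heatmaps the abstract refers to.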
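The reported figures (accuracy, precision, recall, F1, and about 0.0078 s per image on an RTX 5080) imply a conventional evaluation and timing protocol. The sketch below shows how such numbers are commonly measured; the batch size, warm-up and run counts, and macro averaging are assumptions, since the authors' exact protocol is not stated in the abstract.

import time
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

@torch.no_grad()
def benchmark(model, images, device="cuda", warmup=10, runs=100):
    """Average per-image inference time; GPU timing needs explicit synchronization."""
    model.eval().to(device)
    x = images.to(device)
    for _ in range(warmup):          # warm-up passes so CUDA kernels are compiled/cached
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    torch.cuda.synchronize()
    per_image = (time.perf_counter() - start) / (runs * x.size(0))
    return per_image, 1.0 / per_image    # seconds per image, images per second

def report_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}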

Source journal
Diagnostics (ISSN 2075-4418), subject area: Biochemistry, Genetics and Molecular Biology (Clinical Biochemistry)
CiteScore: 4.70
Self-citation rate: 8.30%
Articles published: 2699
Average review time: 19.64 days
About the journal: Diagnostics is an international scholarly open access journal on medical diagnostics. It publishes original research articles, reviews, communications and short notes on the research and development of medical diagnostics. There is no restriction on the length of the papers. Our aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible. Full experimental and/or methodological details must be provided for research articles.