A Novel Deep Learning Model Based Cancerous Lung Nodules Severity Grading Framework Using CT Images

IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
P. Mohan Kumar, V. E. Jayanthi
{"title":"A Novel Deep Learning Model Based Cancerous Lung Nodules Severity Grading Framework Using CT Images","authors":"P. Mohan Kumar,&nbsp;V. E. Jayanthi","doi":"10.1002/ima.70134","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Lung cancer remains one of the leading causes of cancer-related mortality, with early diagnosis being critical for improving patient survival rates. Existing deep learning models for lung nodule severity classification face significant challenges, including overfitting, computational inefficiency, and inaccurate segmentation of nodules from CT images. To overcome these limitations, this study proposes a novel deep learning framework integrating a Quadrangle Attention-based <i>U</i>-shaped Convolutional Transformer (QA-UCT) for segmentation and a Spatial Attention-based Multi-Scale Convolution Network (SMCN) for classification. CT images are enhanced using the Rotationally Invariant Block Matching-based Non-Local Means (RIB-NLM) filter to remove noise while preserving structural details. The QA-UCT model leverages transformer-based global attention mechanisms combined with convolutional layers to segment lung nodules with high precision. The SMCN classifier employs spatial attention mechanisms to categorize nodules as solid, part-solid, or non-solid based on severity. The proposed model was evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. This proposed model achieves a 98.73% dice score for segmentation and 99.56% classification accuracy, outperforming existing methods such as U-Net, VGG, and autoencoders. Improved precision and recall demonstrate superior performance in lung nodule grading. This study introduces a transformer-enhanced segmentation and spatial attention based classification framework that significantly improves lung nodule detection accuracy. The integration of QA-UCT and SMCN enhances both segmentation precision and classification reliability. Future research will explore adapting this framework for liver and kidney segmentation, as well as optimizing computational efficiency for real-time clinical deployment.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.70134","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Lung cancer remains one of the leading causes of cancer-related mortality, and early diagnosis is critical for improving patient survival rates. Existing deep learning models for lung nodule severity classification face significant challenges, including overfitting, computational inefficiency, and inaccurate segmentation of nodules from CT images. To overcome these limitations, this study proposes a novel deep learning framework integrating a Quadrangle Attention-based U-shaped Convolutional Transformer (QA-UCT) for segmentation and a Spatial Attention-based Multi-Scale Convolution Network (SMCN) for classification. CT images are enhanced using the Rotationally Invariant Block Matching-based Non-Local Means (RIB-NLM) filter to remove noise while preserving structural details. The QA-UCT model combines transformer-based global attention mechanisms with convolutional layers to segment lung nodules with high precision. The SMCN classifier employs spatial attention mechanisms to categorize nodules as solid, part-solid, or non-solid according to severity. The proposed model was evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, achieving a 98.73% Dice score for segmentation and 99.56% classification accuracy, outperforming existing methods such as U-Net, VGG, and autoencoders. Improved precision and recall demonstrate superior performance in lung nodule grading. This study introduces a transformer-enhanced segmentation and spatial attention-based classification framework that significantly improves lung nodule detection accuracy. The integration of QA-UCT and SMCN enhances both segmentation precision and classification reliability. Future research will explore adapting this framework to liver and kidney segmentation, as well as optimizing computational efficiency for real-time clinical deployment.
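The paper's implementation is not part of this abstract page, so as a rough illustration of the reported segmentation metric and of the kind of attention mechanism the classifier relies on, the following is a minimal PyTorch-style sketch of a standard Dice-score computation and a generic spatial-attention gate. The names `dice_score` and `SpatialAttention` are hypothetical, and the attention block follows the common channel-pooling pattern rather than the authors' SMCN design.

```python
import torch
import torch.nn as nn


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice coefficient 2*|A∩B| / (|A| + |B|) for binary masks (illustrative)."""
    pred = (pred > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


class SpatialAttention(nn.Module):
    """Generic spatial-attention gate: pools features channel-wise, then learns a
    per-pixel weight map. A sketch only, not the SMCN module from the paper."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)        # (B, 1, H, W) channel average
        max_map, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W) channel maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # re-weight features spatially


# Example: gate a batch of feature maps, then score two random binary masks.
features = torch.randn(2, 64, 32, 32)
gated = SpatialAttention()(features)                 # same shape as the input
print(dice_score(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64) > 0.5))
```

In a classifier such as the one described, a gate of this kind would typically be inserted after multi-scale convolutional feature extraction so that nodule regions contribute more strongly to the final solid / part-solid / non-solid decision.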

Source Journal

International Journal of Imaging Systems and Technology (Engineering & Technology – Imaging Science & Photographic Technology)
CiteScore: 6.90
Self-citation rate: 6.10%
Annual articles: 138
Review time: 3 months
Journal Description

The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.

The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies, as well as negative results are also considered.

The scope of the journal includes, but is not limited to, the following in the context of biomedical research: imaging and neuro-imaging modalities (structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.); neuromodulation and brain stimulation techniques such as TMS and tDCS; software and hardware for imaging, especially related to human and animal health; image segmentation in normal and clinical populations; pattern analysis and classification using machine learning techniques; computational modeling and analysis; brain connectivity and connectomics; systems-level characterization of brain function; neural networks and neurorobotics; computer vision based on human/animal physiology; brain-computer interface (BCI) technology; big data, databasing and data mining.