Solitary Pulmonary Nodule malignancy classification utilising 3D features and semi-supervised Deep Learning

Ioannis D. Apostolopoulos, D. Apostolopoulos, G. Panayiotakis
{"title":"Solitary Pulmonary Nodule malignancy classification utilising 3D features and semi-supervised Deep Learning","authors":"Ioannis D. Apostolopoulos, D. Apostolopoulos, G. Panayiotakis","doi":"10.1109/IISA56318.2022.9904334","DOIUrl":null,"url":null,"abstract":"The volumetric representation of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) imaging is mandatory, especially for capturing and analysing deep features and having a complete picture of the morphology, the shape of the volume, its distribution in space, and its relationship with the adjacent tissues. Automated deep feature extraction in three dimensional space is a specialisation area of 3D Convolutional Neural Networks (CNN). The extraction of the most representative features of malignant SPN representations, can be achieved with the assistance of CNNs. To evaluate this methodology, a 3D CNN called 3D-LidcNet, is developed in this study. The Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset is utilised to extract 2124 SPNs represented in sets of 2D slices. By concatenating 16 slices for each SPN, 3D nodule representations are constructed. To increase the learning capabilities of the 3D CNN, data augmentation is applied during training. 3D-LidcNet achieves 90.68% accuracy in distinguishing benign from malignant SPNs, coming from the strongly labelled subsets of the dataset (898 unique SPNs). To make full use of the weakly labelled SPNs, a semi-supervised training algorithm is utilised to progressively expand the training dataset with the most confident predictions of the weakly labelled SPNs. This approach succeeds in classifying 1585 SPNs, with an accuracy of 87.44%. Finally, 3D-LidcNet is trained and tested using the complete dataset (2124 SPNs) to distinguish between benign and malignant nodules, achieving an accuracy of 89.68%.","PeriodicalId":217519,"journal":{"name":"2022 13th International Conference on Information, Intelligence, Systems & Applications (IISA)","volume":"24 1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 13th International Conference on Information, Intelligence, Systems & Applications (IISA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISA56318.2022.9904334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The volumetric representation of Solitary Pulmonary Nodules (SPNs) in Computed Tomography (CT) imaging is essential, especially for capturing and analysing deep features and for obtaining a complete picture of the nodule's morphology, the shape of its volume, its distribution in space, and its relationship with the adjacent tissues. Automated deep feature extraction in three-dimensional space is a specialisation area of 3D Convolutional Neural Networks (CNNs), and the extraction of the most representative features of malignant SPN representations can be achieved with their assistance. To evaluate this methodology, a 3D CNN called 3D-LidcNet is developed in this study. The Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset is utilised to extract 2124 SPNs, each represented as a set of 2D slices. By concatenating 16 slices for each SPN, 3D nodule representations are constructed. To increase the learning capability of the 3D CNN, data augmentation is applied during training. 3D-LidcNet achieves 90.68% accuracy in distinguishing benign from malignant SPNs on the strongly labelled subset of the dataset (898 unique SPNs). To make full use of the weakly labelled SPNs, a semi-supervised training algorithm is utilised to progressively expand the training dataset with the most confident predictions on the weakly labelled SPNs. This approach succeeds in classifying 1585 SPNs with an accuracy of 87.44%. Finally, 3D-LidcNet is trained and tested using the complete dataset (2124 SPNs) to distinguish between benign and malignant nodules, achieving an accuracy of 89.68%.
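
As described above, each 3D nodule representation is built by concatenating 16 2D slices per SPN. The following is a minimal sketch of that construction, not the authors' code: the slice depth of 16 comes from the abstract, while the padding strategy and the CT intensity window used for normalisation are assumptions.

```python
import numpy as np

def build_nodule_volume(slices, depth=16):
    """Stack up to `depth` 2D slices (each an HxW numpy array) into a 3D volume."""
    stack = np.stack(slices[:depth], axis=0).astype(np.float32)
    # If the nodule spans fewer than `depth` slices, pad by repeating the last
    # slice (an assumption; the paper does not describe the padding strategy).
    if stack.shape[0] < depth:
        pad = np.repeat(stack[-1:], depth - stack.shape[0], axis=0)
        stack = np.concatenate([stack, pad], axis=0)
    # Clip to a typical lung CT window and rescale to [0, 1]
    # (assumed preprocessing, not reported in the abstract).
    stack = np.clip(stack, -1000.0, 400.0)
    return (stack + 1000.0) / 1400.0
```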
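The abstract does not describe the internal architecture of 3D-LidcNet, so the sketch below is only an illustrative 3D CNN for binary benign/malignant classification of such volumes; the layer counts, filter sizes, and the assumed 16×64×64 input shape are not taken from the paper.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Illustrative 3D CNN for benign vs. malignant SPN classification (not 3D-LidcNet)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                 # (16, 8, 32, 32) for a 1x16x64x64 input
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                 # (32, 4, 16, 16)
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),         # global average pooling -> (64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, 2)   # logits for benign / malignant

    def forward(self, x):                    # x: (batch, 1, 16, 64, 64)
        x = self.features(x).flatten(1)
        return self.classifier(x)
```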
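The semi-supervised step progressively expands the training set with the most confident predictions on the weakly labelled SPNs. A minimal self-training (pseudo-labelling) loop in that spirit is sketched below under stated assumptions: the confidence threshold, the number of rounds, and the `train_fn` retraining hook are illustrative, not values or code reported by the authors.

```python
import torch
import torch.nn.functional as F

def self_training(model, train_fn, labelled, weakly_labelled, rounds=5, threshold=0.95):
    """labelled: list of (volume, label); weakly_labelled: list of volume tensors
    of shape (1, depth, H, W). Returns the model and the expanded labelled set."""
    pool = list(weakly_labelled)
    for _ in range(rounds):
        train_fn(model, labelled)                      # retrain on the current labelled set
        model.eval()
        still_unlabelled = []
        with torch.no_grad():
            for volume in pool:
                probs = F.softmax(model(volume.unsqueeze(0)), dim=1)[0]
                conf, pseudo_label = probs.max(dim=0)
                if conf.item() >= threshold:           # keep only confident pseudo-labels
                    labelled.append((volume, pseudo_label.item()))
                else:
                    still_unlabelled.append(volume)
        pool = still_unlabelled
        if not pool:                                   # every weak sample has been absorbed
            break
    return model, labelled
```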