Multimodal Deep Learning Network for Differentiating Between Benign and Malignant Pulmonary Ground Glass Nodules.

IF 1.1 | CAS Tier 4 (Medicine) | JCR Q3 (Radiology, Nuclear Medicine & Medical Imaging)
Gang Liu, Fei Liu, Xu Mao, Xiaoting Xie, Jingyao Sang, Husai Ma, Haiyun Yang, Hui He
{"title":"Multimodal Deep Learning Network for Differentiating Between Benign and Malignant Pulmonary Ground Glass Nodules.","authors":"Gang Liu, Fei Liu, Xu Mao, Xiaoting Xie, Jingyao Sang, Husai Ma, Haiyun Yang, Hui He","doi":"10.2174/0115734056301741240903072017","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study aimed to establish a multimodal deep-learning network model to enhance the diagnosis of benign and malignant pulmonary ground glass nodules [GGNs].</p><p><strong>Methods: </strong>Retrospective data on pulmonary GGNs were collected from multiple centers across China, including North, Northeast, Northwest, South, and Southwest China. The data were divided into a training set and a validation set in an 8:2 ratio. In addition, a GGN dataset was also obtained from our hospital database and used as the test set. All patients underwent chest computed tomography [CT], and the final diagnosis of the nodules was based on postoperative pathological reports. The Residual Network [ResNet] was used to extract imaging data, the Word2Vec method for semantic information extraction, and the Self Attention method for combining imaging features and patient data to construct a multimodal classification model. Then, the diagnostic efficiency of the proposed multimodal model was compared with that of existing ResNet and VGG models and radiologists.</p><p><strong>Results: </strong>The multicenter dataset comprised 1020 GGNs, including 265 benign and 755 malignant nodules, and the test dataset comprised 204 GGNs, with 67 benign and 137 malignant nodules. In the validation set, the proposed multimodal model achieved an accuracy of 90.2%, a sensitivity of 96.6%, and a specificity of 75.0%, which surpassed that of the VGG [73.1%, 76.7%, and 66.5%] and ResNet [78.0%, 83.3%, and 65.8%] models in diagnosing benign and malignant nodules. In the test set, the multimodal model accurately diagnosed 125 [91.18%] malignant nodules, outperforming radiologists [80.37% accuracy]. Moreover, the multimodal model correctly identified 54 [accuracy, 80.70%] benign nodules, compared to radiologists' accuracy of 85.47%. The consistency test comparing radiologists' diagnostic results with the multimodal model's results in relation to postoperative pathology showed strong agreement, with the multimodal model demonstrating closer alignment with gold standard pathological findings [Kappa=0.720, P<0.01].</p><p><strong>Conclusion: </strong>The multimodal deep learning network model exhibited promising diagnostic effectiveness in distinguishing benign and malignant GGNs and, therefore, holds potential as a reference tool to assist radiologists in improving the diagnostic accuracy of GGNs, potentially enhancing their work efficiency in clinical settings.</p>","PeriodicalId":54215,"journal":{"name":"Current Medical Imaging Reviews","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current Medical Imaging Reviews","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2174/0115734056301741240903072017","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Objective: This study aimed to establish a multimodal deep-learning network model to improve the diagnosis of benign and malignant pulmonary ground glass nodules (GGNs).

Methods: Retrospective data on pulmonary GGNs were collected from multiple centers across China, spanning North, Northeast, Northwest, South, and Southwest China, and divided into a training set and a validation set in an 8:2 ratio. A separate GGN dataset obtained from our hospital database served as the test set. All patients underwent chest computed tomography (CT), and the final diagnosis of each nodule was based on the postoperative pathological report. A Residual Network (ResNet) was used to extract imaging features, the Word2Vec method to extract semantic information from patient data, and a self-attention mechanism to fuse the imaging features and patient data into a multimodal classification model. The diagnostic performance of the proposed multimodal model was then compared with that of existing ResNet and VGG models and with radiologists.
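The abstract does not give implementation details, but the described pipeline (a ResNet image branch, Word2Vec-style embeddings for clinical data, and self-attention fusion) can be sketched as follows. This is a minimal, illustrative PyTorch sketch; the `resnet18` backbone choice, all layer sizes, and the fusion head are assumptions, not the authors' actual configuration.

```python
# Minimal sketch of the described pipeline: a ResNet branch for CT images,
# an embedding branch for tokenized clinical text (standing in for the
# paper's Word2Vec vectors), and self-attention fusion.
# All architectural choices here are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalGGNClassifier(nn.Module):
    def __init__(self, vocab_size=5000, dim=256, num_classes=2):
        super().__init__()
        # Image branch: ResNet-18 backbone with its final fc layer
        # replaced by a projection into the shared fusion dimension.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.image_branch = backbone
        # Text branch: an embedding table; in the paper this role is
        # played by Word2Vec embeddings over the patient records.
        self.text_embed = nn.Embedding(vocab_size, dim)
        # Self-attention over the sequence [image token, text tokens].
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4,
                                          batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, ct_image, clinical_tokens):
        # ct_image: (B, 3, H, W); clinical_tokens: (B, T) token ids.
        img_tok = self.image_branch(ct_image).unsqueeze(1)   # (B, 1, dim)
        txt_tok = self.text_embed(clinical_tokens)           # (B, T, dim)
        seq = torch.cat([img_tok, txt_tok], dim=1)           # (B, 1+T, dim)
        fused, _ = self.attn(seq, seq, seq)                  # self-attention
        return self.classifier(fused.mean(dim=1))            # (B, 2) logits

model = MultimodalGGNClassifier()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 2])
```

Treating the pooled image feature as one extra token in the attention sequence is one common way to let the model weigh imaging evidence against clinical evidence per case; the paper may fuse the modalities differently.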

Results: The multicenter dataset comprised 1020 GGNs (265 benign and 755 malignant nodules), and the test dataset comprised 204 GGNs (67 benign and 137 malignant nodules). In the validation set, the proposed multimodal model achieved an accuracy of 90.2%, a sensitivity of 96.6%, and a specificity of 75.0%, surpassing the VGG (73.1%, 76.7%, and 66.5%) and ResNet (78.0%, 83.3%, and 65.8%) models in diagnosing benign and malignant nodules. In the test set, the multimodal model correctly diagnosed 125 (91.18%) of the malignant nodules, outperforming the radiologists (80.37% accuracy), and correctly identified 54 (80.70%) of the benign nodules, compared with the radiologists' accuracy of 85.47%. Consistency tests of the radiologists' and the multimodal model's diagnoses against postoperative pathology showed strong agreement, with the multimodal model aligning more closely with the gold-standard pathological findings (Kappa = 0.720, P < 0.01).
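For readers who want to check the agreement statistic, Cohen's kappa can be reproduced from the test-set counts reported above (roughly 125 of 137 malignant and 54 of 67 benign nodules classified correctly by the model). The per-case label arrays below are a reconstruction from those counts, not the authors' raw data:

```python
# Rebuild test-set labels from the counts in the abstract:
# 137 malignant (125 called correctly), 67 benign (54 called correctly).
# These arrays are illustrative, not the study's raw data.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

pathology = np.array([1] * 137 + [0] * 67)       # 1 = malignant, 0 = benign
model_pred = np.array([1] * 125 + [0] * 12 +     # malignant cases
                      [0] * 54 + [1] * 13)       # benign cases

tn, fp, fn, tp = confusion_matrix(pathology, model_pred).ravel()
sensitivity = tp / (tp + fn)                     # 125/137 ~ 0.912
specificity = tn / (tn + fp)                     # 54/67  ~ 0.806
accuracy = (tp + tn) / len(pathology)            # 179/204 ~ 0.877
kappa = cohen_kappa_score(pathology, model_pred) # ~ 0.72, matching the text

print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
      f"acc={accuracy:.3f} kappa={kappa:.3f}")
```

With these reconstructed counts the computed kappa comes out near 0.72, consistent with the reported Kappa = 0.720.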

Conclusion: The multimodal deep learning network model showed promising diagnostic performance in distinguishing benign from malignant GGNs and therefore holds potential as a reference tool to help radiologists improve both the diagnostic accuracy of GGNs and their work efficiency in clinical settings.

Source journal: Current Medical Imaging Reviews
CiteScore: 2.60 | Self-citation rate: 0.00% | Articles per year: 246 | Review time: 1 month

About the journal: Current Medical Imaging Reviews publishes frontier review articles, original research articles, drug clinical trial studies, and guest-edited thematic issues on the latest advances in medical imaging dedicated to clinical research. All relevant areas are covered by the journal, including advances in the diagnosis, instrumentation, and therapeutic applications related to all modern medical imaging techniques. The journal is essential reading for all clinicians and researchers involved in medical imaging and diagnosis.