{"title":"利用自我监督等变注意定位胸片上的肺部异常。","authors":"Gavin D'Souza, N V Subba Reddy, K N Manjunath","doi":"10.1007/s13534-022-00249-5","DOIUrl":null,"url":null,"abstract":"<p><p>Chest X-Ray (CXR) images provide most anatomical details and the abnormalities on a 2D plane. Therefore, a 2D view of the 3D anatomy is sometimes sufficient for the initial diagnosis. However, close to fourteen commonly occurring diseases are sometimes difficult to identify by visually inspecting the images. Therefore, there is a drift toward developing computer-aided assistive systems to help radiologists. This paper proposes a deep learning model for the classification and localization of chest diseases by using image-level annotations. The model consists of a modified Resnet50 backbone for extracting feature corpus from the images, a classifier, and a pixel correlation module (PCM). During PCM training, the network is a weight-shared siamese architecture where the first branch applies the affine transform to the image before feeding to the network, while the second applies the same transform to the network output. The method was evaluated on CXR from the clinical center in the ratio of 70:20 for training and testing. The model was developed and tested using the cloud computing platform Google Colaboratory (NVidia Tesla P100 GPU, 16 GB of RAM). A radiologist subjectively validated the results. Our model trained with the configurations mentioned in this paper outperformed benchmark results.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s13534-022-00249-5.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"13 1","pages":"21-30"},"PeriodicalIF":3.2000,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9873849/pdf/","citationCount":"1","resultStr":"{\"title\":\"Localization of lung abnormalities on chest X-rays using self-supervised equivariant attention.\",\"authors\":\"Gavin D'Souza, N V Subba Reddy, K N Manjunath\",\"doi\":\"10.1007/s13534-022-00249-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Chest X-Ray (CXR) images provide most anatomical details and the abnormalities on a 2D plane. Therefore, a 2D view of the 3D anatomy is sometimes sufficient for the initial diagnosis. However, close to fourteen commonly occurring diseases are sometimes difficult to identify by visually inspecting the images. Therefore, there is a drift toward developing computer-aided assistive systems to help radiologists. This paper proposes a deep learning model for the classification and localization of chest diseases by using image-level annotations. The model consists of a modified Resnet50 backbone for extracting feature corpus from the images, a classifier, and a pixel correlation module (PCM). During PCM training, the network is a weight-shared siamese architecture where the first branch applies the affine transform to the image before feeding to the network, while the second applies the same transform to the network output. The method was evaluated on CXR from the clinical center in the ratio of 70:20 for training and testing. The model was developed and tested using the cloud computing platform Google Colaboratory (NVidia Tesla P100 GPU, 16 GB of RAM). A radiologist subjectively validated the results. 
Our model trained with the configurations mentioned in this paper outperformed benchmark results.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s13534-022-00249-5.</p>\",\"PeriodicalId\":46898,\"journal\":{\"name\":\"Biomedical Engineering Letters\",\"volume\":\"13 1\",\"pages\":\"21-30\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9873849/pdf/\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Engineering Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s13534-022-00249-5\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Engineering Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s13534-022-00249-5","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 1
Abstract
Chest X-ray (CXR) images present most anatomical details and abnormalities on a 2D plane, so a 2D view of the 3D anatomy is sometimes sufficient for an initial diagnosis. However, close to fourteen commonly occurring diseases can be difficult to identify by visual inspection of the images, and there is therefore a growing trend toward computer-aided assistive systems that support radiologists. This paper proposes a deep learning model for the classification and localization of chest diseases using only image-level annotations. The model consists of a modified ResNet50 backbone that extracts a feature corpus from the images, a classifier, and a pixel correlation module (PCM). During PCM training, the network is a weight-shared Siamese architecture: the first branch applies an affine transform to the image before feeding it to the network, while the second branch applies the same transform to the network output. The method was evaluated on CXR images from the clinical center, split in the ratio of 70:20 for training and testing. The model was developed and tested on the cloud computing platform Google Colaboratory (NVIDIA Tesla P100 GPU, 16 GB RAM), and a radiologist subjectively validated the results. Trained with the configurations described in this paper, our model outperformed benchmark results.
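The equivariance constraint described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: the names (EquivariantCXRModel, pcm, equivariance_step), the rotation chosen as the affine transform, the simplified affinity-based PCM, and the L1 consistency loss are all illustrative assumptions. The paper itself only specifies a modified ResNet50 backbone, a classifier, a pixel correlation module, and a weight-shared Siamese scheme in which one branch transforms the input image and the other applies the same transform to the network output.

```python
# Hypothetical sketch of Siamese equivariant-attention training (PyTorch).
# All module and function names are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF


class EquivariantCXRModel(nn.Module):
    """ResNet50 backbone + 1x1-conv classifier head + a stand-in pixel correlation module."""

    def __init__(self, num_classes: int = 14):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep only the convolutional stages so the output stays a spatial feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)  # class activation maps
        self.pcm_proj = nn.Conv2d(2048, 256, kernel_size=1)            # projection for the PCM

    def forward(self, x):
        feat = self.features(x)                   # (B, 2048, H/32, W/32)
        cam = self.classifier(feat)               # coarse class activation maps
        cam_refined = self.pcm(feat, cam)         # PCM-refined maps
        logits = F.adaptive_avg_pool2d(cam, 1).flatten(1)  # image-level predictions
        return logits, cam, cam_refined

    def pcm(self, feat, cam):
        # Simplified PCM: refine CAMs with normalized pixel-to-pixel feature affinities.
        b, c, h, w = cam.shape
        q = F.normalize(self.pcm_proj(feat).flatten(2), dim=1)   # (B, 256, HW)
        affinity = torch.relu(q.transpose(1, 2) @ q)              # (B, HW, HW)
        affinity = affinity / (affinity.sum(dim=-1, keepdim=True) + 1e-5)
        cam_flat = cam.flatten(2)                                  # (B, C, HW)
        return (cam_flat @ affinity.transpose(1, 2)).view(b, c, h, w)


def equivariance_step(model, images, labels, angle=10.0, lam=1.0):
    """One weight-shared Siamese step: transform-then-network vs network-then-transform.

    labels: multi-hot float tensor of shape (B, num_classes).
    """
    # Branch 1: affine-transform the image (a rotation here, as an example), then forward.
    images_t = TF.rotate(images, angle)
    _, _, cam_from_transformed = model(images_t)
    # Branch 2: forward the original image, then apply the same transform to the refined CAMs.
    logits, _, cam_refined = model(images)
    cam_transformed = TF.rotate(cam_refined, angle)
    # Image-level classification loss + equivariance consistency loss.
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    eq_loss = F.l1_loss(cam_from_transformed, cam_transformed)
    return cls_loss + lam * eq_loss
```

The underlying design intuition is that the attention (class activation) maps should be equivariant to spatial transforms of the input; enforcing that consistency with only image-level labels regularizes the maps and sharpens localization, which is the role the Siamese PCM training plays in the paper.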
Supplementary information: The online version contains supplementary material available at 10.1007/s13534-022-00249-5.
Journal introduction:
Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.