{"title":"以磨玻璃结节为表现的浸润性肺腺癌分类的深度迁移学习策略","authors":"Chen Ma, Shihong Yue, Qi Li","doi":"10.1109/I2MTC50364.2021.9459841","DOIUrl":null,"url":null,"abstract":"Lung cancer is one of the deadliest diseases in which adenocarcinoma account for nearly 40%. To make an effective treatment and diagnosis, it is vital to accurately discriminate invasive adenocarcinoma (IA) from non-IA by analyzing ground glass nodules (GGNs) from patient's CT images. Compared with solid nodules and normal lung parenchyma, the contours of GGN are blurred and the gray scale is little changed. So far, the problem to accurately discriminate IA and non-IA remains unsolved due to insufficient labeled GGN images. In this paper, considering the generalization of convolutional neural network (CNN) and various flexible transfer strategies, we proposed a lung adenocarcinoma classification method after combining transfer learning and CNN, where the use of transfer learning strategies aims at overcoming the problem of insufficient GGN samples. Firstly, the CT image on IA and non-IA patients were collected which were labeled by surgical pathology. Secondly, two transfer learning strategies that consist of CNN feature extractor and fine-tuning network were applied to classify IA and non-IA. Finally, in the fine-tuning network process, a Progressive Fine-Tuning (PFT) strategy was combined to determine the effective depth of fine-tuning to avoid inaccurate induction of GGNs. In the CNN feature extractor experiment, four comparable models were used including linear discrimination, Support Vector Machines, K-nearest neighbor, and subspace discrimination. The indicators of sensitivity, specificity, accuracy, and AUC (area under curve) were used to quantitatively assess the performance of the two transfer strategies. Experiments show that the strategy of CNN feature extractor based on transfer learning had the highest accuracy, which was significantly higher than fine-tuning network strategy with PFT. In the experiment of CNN feature extractor, the model of linear discrimination to predict the invasiveness of GGNs has 94% accuracy whereas the other three models have 92.9%, 93.1% and 92.9%, respectively.","PeriodicalId":6772,"journal":{"name":"2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)","volume":"28 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep Transfer Learning Strategy for Invasive Lung Adenocarcinoma Classification Appearing as Ground Glass Nodules\",\"authors\":\"Chen Ma, Shihong Yue, Qi Li\",\"doi\":\"10.1109/I2MTC50364.2021.9459841\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lung cancer is one of the deadliest diseases in which adenocarcinoma account for nearly 40%. To make an effective treatment and diagnosis, it is vital to accurately discriminate invasive adenocarcinoma (IA) from non-IA by analyzing ground glass nodules (GGNs) from patient's CT images. Compared with solid nodules and normal lung parenchyma, the contours of GGN are blurred and the gray scale is little changed. So far, the problem to accurately discriminate IA and non-IA remains unsolved due to insufficient labeled GGN images. 
In this paper, considering the generalization of convolutional neural network (CNN) and various flexible transfer strategies, we proposed a lung adenocarcinoma classification method after combining transfer learning and CNN, where the use of transfer learning strategies aims at overcoming the problem of insufficient GGN samples. Firstly, the CT image on IA and non-IA patients were collected which were labeled by surgical pathology. Secondly, two transfer learning strategies that consist of CNN feature extractor and fine-tuning network were applied to classify IA and non-IA. Finally, in the fine-tuning network process, a Progressive Fine-Tuning (PFT) strategy was combined to determine the effective depth of fine-tuning to avoid inaccurate induction of GGNs. In the CNN feature extractor experiment, four comparable models were used including linear discrimination, Support Vector Machines, K-nearest neighbor, and subspace discrimination. The indicators of sensitivity, specificity, accuracy, and AUC (area under curve) were used to quantitatively assess the performance of the two transfer strategies. Experiments show that the strategy of CNN feature extractor based on transfer learning had the highest accuracy, which was significantly higher than fine-tuning network strategy with PFT. In the experiment of CNN feature extractor, the model of linear discrimination to predict the invasiveness of GGNs has 94% accuracy whereas the other three models have 92.9%, 93.1% and 92.9%, respectively.\",\"PeriodicalId\":6772,\"journal\":{\"name\":\"2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)\",\"volume\":\"28 1\",\"pages\":\"1-5\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/I2MTC50364.2021.9459841\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/I2MTC50364.2021.9459841","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Transfer Learning Strategy for Invasive Lung Adenocarcinoma Classification Appearing as Ground Glass Nodules

Chen Ma, Shihong Yue, Qi Li
2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 17 May 2021, pp. 1-5
DOI: 10.1109/I2MTC50364.2021.9459841
Lung cancer is one of the deadliest diseases, and adenocarcinoma accounts for nearly 40% of cases. For effective diagnosis and treatment, it is vital to accurately discriminate invasive adenocarcinoma (IA) from non-IA by analyzing ground glass nodules (GGNs) in patients' CT images. Compared with solid nodules and normal lung parenchyma, the contours of GGNs are blurred and their gray-scale variation is small. To date, accurate discrimination between IA and non-IA remains unsolved because labeled GGN images are scarce. In this paper, considering the generalization ability of convolutional neural networks (CNNs) and the flexibility of transfer strategies, we propose a lung adenocarcinoma classification method that combines transfer learning with a CNN, where transfer learning is used to overcome the shortage of GGN samples. First, CT images of IA and non-IA patients were collected and labeled according to surgical pathology. Second, two transfer learning strategies, a CNN feature extractor and network fine-tuning, were applied to classify IA and non-IA. Finally, in the fine-tuning strategy, a Progressive Fine-Tuning (PFT) scheme was incorporated to determine the effective depth of fine-tuning and avoid inaccurate induction on GGNs. In the CNN feature extractor experiment, four classifiers were compared: linear discriminant analysis, support vector machines, K-nearest neighbors, and subspace discriminant analysis. Sensitivity, specificity, accuracy, and AUC (area under the curve) were used to quantitatively assess the performance of the two transfer strategies. Experiments show that the transfer-learning-based CNN feature extractor achieved the highest accuracy, significantly outperforming the fine-tuning strategy with PFT. In the CNN feature extractor experiment, the linear discriminant model predicted the invasiveness of GGNs with 94% accuracy, while the other three classifiers achieved 92.9%, 93.1%, and 92.9%, respectively.
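To make the CNN feature extractor strategy concrete, the following is a minimal sketch in Python (PyTorch + scikit-learn). The ResNet-18 backbone, input size, and classifier hyperparameters are illustrative assumptions, since the abstract does not name the pretrained network; "subspace discriminant" (a MATLAB ensemble method) is only approximated here by random-subspace bagging of LDA.

```python
# Sketch of the CNN-feature-extractor transfer strategy: a pretrained CNN is
# frozen and used only to embed GGN patches; classical classifiers are then
# trained on the embeddings. Backbone and hyperparameters are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Pretrained backbone with its classification head removed (512-d features).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_patches):
    """Embed a list of GGN patches (PIL images) with the frozen CNN."""
    batch = torch.stack([preprocess(p.convert("RGB")) for p in pil_patches])
    return backbone(batch).numpy()

# The four classifiers compared in the paper; the subspace discriminant is
# approximated by a random-subspace bagging ensemble of LDA learners.
classifiers = {
    "linear_discriminant": LinearDiscriminantAnalysis(),
    "svm": SVC(probability=True),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "subspace_discriminant": BaggingClassifier(
        estimator=LinearDiscriminantAnalysis(), max_features=0.5),
}
# Usage: feats = extract_features(train_patches); clf.fit(feats, labels)
```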
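The abstract does not spell out the PFT procedure, so the sketch below encodes one plausible reading: unfreeze the pretrained network one block at a time from the output side, retrain, and keep the unfreezing depth with the best validation accuracy. The block granularity and the selection rule are assumptions, not the paper's specification.

```python
# One plausible implementation of Progressive Fine-Tuning (PFT): unfreeze
# blocks from the output side one at a time and keep the depth that gives
# the best validation accuracy. Granularity and stopping rule are assumed.
import copy

def progressive_fine_tune(model, blocks, train_fn, eval_fn):
    """blocks: nn.Module blocks ordered from output toward input, e.g.
    [model.fc, model.layer4, model.layer3] for a ResNet.
    train_fn(model) trains only parameters with requires_grad=True;
    eval_fn(model) returns validation accuracy."""
    for p in model.parameters():
        p.requires_grad = False                 # start fully frozen

    best_acc = eval_fn(model)                   # frozen baseline
    best_state = copy.deepcopy(model.state_dict())
    best_depth = 0
    for depth, block in enumerate(blocks, start=1):
        for p in block.parameters():            # unfreeze one more block
            p.requires_grad = True
        train_fn(model)
        acc = eval_fn(model)
        if acc > best_acc:
            best_acc, best_depth = acc, depth
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)           # roll back to the best depth
    return best_depth, best_acc
```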
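The four reported indicators can be computed from a binary confusion matrix and the classifier's predicted scores; a small helper, assuming IA is the positive class (label 1):

```python
# The four evaluation indicators from the paper, computed with scikit-learn.
# Sensitivity is the recall of the IA class; specificity the recall of non-IA.
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

def evaluate(y_true, y_pred, y_score):
    """y_true, y_pred: 0 = non-IA, 1 = IA; y_score: predicted IA probability."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }
```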