Authors: Fei Wu, Pei Liu, Bo Fu, Feng Ye
DOI: 10.1145/3529836.3529942
Venue: 2022 14th International Conference on Machine Learning and Computing (ICMLC)
Published: 2022-02-18
Citations: 4
DeepGCNMIL: Multi-head Attention Guided Multi-Instance Learning Approach for Whole-Slide Images Survival Analysis Using Graph Convolutional Networks
Analyzing giga-pixel Whole-Slide Images (WSIs) is difficult to scale to large datasets because patch-level annotation is labor-intensive. Current multi-instance learning (MIL) frameworks guided by attention mechanisms have successfully linked giga-pixel WSIs to survival outcomes, making them suitable for large-scale data analysis. However, simple aggregation of patch-level features may not comprehensively characterize a WSI, since it ignores the internal connections between patches. To address this problem, this paper proposes a graph convolutional network-based MIL framework named DeepGCNMIL. We first cluster patches into several phenotypes based on their similarity, then build a graph for each cluster so that node edges capture the internal connections among patches, and use a three-layer graph convolutional network (GCN) to learn a representation of each phenotype. We then introduce multi-head attention to aggregate the phenotype features into a WSI representation for prognostic risk assessment. Our method achieves a C-index of 0.673 (± 0.053) on the NLST dataset (0.035 ahead of the second-best method) and 0.632 (± 0.065) on the TCGA_BRCA dataset (0.018 ahead of the second-best method), showing that for large-scale prognostic modeling of giga-pixel digital pathology images, our method outperforms similar WSI survival prediction models. This novel MIL framework could be used to assess the prognostic risk of individual patients and help deliver personalized medicine.
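The pipeline the abstract describes (cluster patches into phenotypes, build a graph per phenotype, run a three-layer GCN, pool phenotypes with multi-head attention, emit a risk score) can be sketched end to end. The following is a minimal NumPy-only illustration, not the authors' implementation: the cluster count, k-NN graph construction, feature sizes, and the random linear risk head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Cluster patch features into k phenotypes (plain Lloyd's algorithm)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def knn_adjacency(X, k=4):
    """k-NN graph over patches so node edges encode patch similarity."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    for i, idx in enumerate(np.argsort(d, axis=1)[:, :k]):
        A[i, idx] = 1.0
    A = np.maximum(A, A.T)      # symmetrize the graph
    return A + np.eye(len(X))   # add self-loops

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} A D^{-1/2} H W)."""
    d = A.sum(1)
    A_norm = A / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ H @ W, 0.0)

def multi_head_attention_pool(F, heads=4):
    """Aggregate phenotype features into one WSI vector: each head scores
    phenotypes, softmax-normalizes, and takes a weighted sum."""
    pooled = []
    for _ in range(heads):
        w = rng.standard_normal(F.shape[1])   # per-head scoring vector
        s = F @ w
        a = np.exp(s - s.max())
        a /= a.sum()
        pooled.append(a @ F)
    return np.concatenate(pooled)

# Toy WSI: 60 patch feature vectors (stand-ins for CNN backbone features).
patches = rng.standard_normal((60, 16))
labels = kmeans(patches, k=5)

# One small graph plus a three-layer GCN per phenotype, mean-pooled per graph.
Ws = [rng.standard_normal((16, 16)) * 0.1 for _ in range(3)]
phenotype_feats = []
for j in range(5):
    if not (labels == j).any():
        continue                              # skip empty phenotypes
    Xj = patches[labels == j]
    A = knn_adjacency(Xj, k=min(4, len(Xj) - 1))
    H = Xj
    for W in Ws:
        H = gcn_layer(A, H, W)
    phenotype_feats.append(H.mean(0))
F = np.stack(phenotype_feats)                 # (phenotypes, 16)

wsi_repr = multi_head_attention_pool(F, heads=4)          # (64,)
risk = float(wsi_repr @ rng.standard_normal(len(wsi_repr)))  # linear risk head
print(wsi_repr.shape, np.isfinite(risk))
```

In a trained model the GCN weights, attention vectors, and risk head would be learned jointly under a survival loss (e.g. Cox partial likelihood), and prediction quality would be measured with the concordance index reported in the abstract.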