Huilin Xu, Aoshen Wu, He Ren, Chenghang Yu, Gang Liu, Lei Liu
{"title":"基于注意力的多实例学习网络在整张幻灯片图像上的结直肠癌一致分子亚型分类","authors":"Huilin Xu , Aoshen Wu , He Ren , Chenghang Yu , Gang Liu , Lei Liu","doi":"10.1016/j.acthis.2023.152057","DOIUrl":null,"url":null,"abstract":"<div><p>Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous with different clinical-pathological characteristics, prognostic status, and therapy responses. Thus, the precise diagnosis of CRC subtypes is of great significance for improving the prognosis and survival of CRC patients. Nowadays, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMSs). In this study, we applied a weakly supervised deep learning method, named attention-based multi-instance learning (MIL), on formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish CMS1 subtype from CMS2, CMS3, and CMS4 subtypes, as well as distinguish CMS4 from CMS1, CMS2, and CMS3 subtypes. The advantage of MIL is training a bag of the tiled instance with bag-level labels only. Our experiment was performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling operator and mean-pooling operator on aggregating bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When compared CMS1 with CMS234, max-pooling reached the ACC of 83.86 % and the mean-pooling operator reached the AUC of 0.731. While comparing CMS4 with CMS123, mean-pooling reached the ACC of 74.26 % and max-pooling reached the AUC of 0.609. 
Our results implied that WSIs could be utilized to classify CMSs, and manual pixel-level annotation is not a necessity for computational pathology imaging analysis.</p></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images\",\"authors\":\"Huilin Xu , Aoshen Wu , He Ren , Chenghang Yu , Gang Liu , Lei Liu\",\"doi\":\"10.1016/j.acthis.2023.152057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous with different clinical-pathological characteristics, prognostic status, and therapy responses. Thus, the precise diagnosis of CRC subtypes is of great significance for improving the prognosis and survival of CRC patients. Nowadays, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMSs). In this study, we applied a weakly supervised deep learning method, named attention-based multi-instance learning (MIL), on formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish CMS1 subtype from CMS2, CMS3, and CMS4 subtypes, as well as distinguish CMS4 from CMS1, CMS2, and CMS3 subtypes. The advantage of MIL is training a bag of the tiled instance with bag-level labels only. Our experiment was performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling operator and mean-pooling operator on aggregating bag-level scores. 
The results showed that the 3-layer model achieved the best performance in both comparison groups. When compared CMS1 with CMS234, max-pooling reached the ACC of 83.86 % and the mean-pooling operator reached the AUC of 0.731. While comparing CMS4 with CMS123, mean-pooling reached the ACC of 74.26 % and max-pooling reached the AUC of 0.609. Our results implied that WSIs could be utilized to classify CMSs, and manual pixel-level annotation is not a necessity for computational pathology imaging analysis.</p></div>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0065128123000636\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"99","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0065128123000636","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images
Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous, with varying clinical-pathological characteristics, prognostic status, and therapy responses. Precise diagnosis of CRC subtypes is therefore of great significance for improving the prognosis and survival of CRC patients. The most commonly used molecular-level CRC classification system today is the Consensus Molecular Subtypes (CMS). In this study, we applied a weakly supervised deep learning method, attention-based multi-instance learning (MIL), to formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish the CMS1 subtype from CMS2, CMS3, and CMS4, and likewise CMS4 from CMS1, CMS2, and CMS3. The advantage of MIL is that it trains on bags of tiled instances using only bag-level labels. Our experiments were performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling and mean-pooling operators to aggregate instance scores into bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When comparing CMS1 with CMS2/3/4, max-pooling reached an accuracy (ACC) of 83.86 % and mean-pooling reached an AUC of 0.731; when comparing CMS4 with CMS1/2/3, mean-pooling reached an ACC of 74.26 % and max-pooling reached an AUC of 0.609. Our results imply that WSIs can be used to classify CMS, and that manual pixel-level annotation is not a necessity for computational pathology image analysis.
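The bag-level aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the tile scores and attention logits below are hypothetical values, and the attention pooling shown is the standard softmax-weighted mean used in attention-based MIL, with the learned attention network omitted for brevity.

```python
import math

def max_pooling(instance_scores):
    """Bag score = the single most confident tile; the slide is labeled
    positive if any one tile looks positive."""
    return max(instance_scores)

def mean_pooling(instance_scores):
    """Bag score = average over all tiles; evidence is smoothed across
    the whole slide."""
    return sum(instance_scores) / len(instance_scores)

def attention_pooling(instance_scores, attention_logits):
    """Bag score = softmax-weighted mean of tile scores. In attention-based
    MIL the logits come from a small learned network; here they are given
    directly for illustration."""
    exps = [math.exp(a) for a in attention_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * s for w, s in zip(weights, instance_scores))

# Hypothetical tile-level probabilities for one WSI (bag); only the
# slide-level (bag-level) label would be needed to train the classifier.
tiles = [0.1, 0.2, 0.9, 0.3]

bag_max = max_pooling(tiles)    # dominated by the strongest tile
bag_mean = mean_pooling(tiles)  # diluted by the many low-scoring tiles

# The bag is called positive if the aggregated score crosses a threshold:
# here max-pooling and mean-pooling disagree on the same slide.
print(bag_max >= 0.5, bag_mean >= 0.5)
```

The example makes the trade-off in the abstract concrete: max-pooling is sensitive to a single strongly positive region (useful when the subtype signal is focal), while mean-pooling requires broad support across the slide, which is one reason the two operators can rank differently on ACC versus AUC.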