Multiple-instance CNN Improved by S3TA for Colon Cancer Classification with Unannotated Histopathological Images

Tiange Ye, Rushi Lan, Xiaonan Luo
DOI: 10.1109/ICICIP53388.2021.9642206
Published in: 2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)
Publication date: 2021-12-03
Citations: 1

Abstract

In this paper, we propose a new method for colon cancer classification from histopathological images, which can automatically analyze a given whole slide image (WSI). Cancer is usually classified by examining a WSI, which is typically 20000 × 20000 pixels. The cost of obtaining WSIs with annotated cancer regions is very high. Multiple-instance learning (MIL) is a variant of supervised learning in which the instances in a bag share a single class label; that is, MIL needs only unannotated WSIs. In recent years, MIL has been combined with a hard attention mechanism, which has achieved good performance. However, this hard attention mechanism cannot attend to the interior of each patch, i.e., it lacks a soft attention mechanism. In this paper, a soft, sequential, spatial, top-down attention mechanism (which we abbreviate as S3TA) is used to make up for this deficiency of the MIL attention mechanism. Finally, our experiments show that by varying the number of attention steps in S3TA, we achieved an accuracy of 93.6%, better than the old model.
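The paper's actual network is not reproduced in this abstract; as a minimal sketch of the attention-based MIL pooling idea it alludes to (soft weights over patches rather than hard selection of a few patches), assuming pre-extracted per-patch feature vectors and a hypothetical linear scorer `w`:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mil_attention_pool(patch_features, w):
    """Soft-attention MIL pooling (sketch).

    patch_features: list of per-patch feature vectors (the instances in one bag/WSI)
    w: hypothetical linear attention scorer over feature dimensions
    Returns the attention-weighted bag embedding and the patch weights.
    """
    # score each patch, then turn scores into soft weights
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in patch_features]
    weights = softmax(scores)
    # bag embedding = weighted sum of patch features (every patch contributes,
    # unlike hard attention, which keeps only the top-scoring patches)
    dim = len(patch_features[0])
    bag = [sum(weights[i] * patch_features[i][d] for i in range(len(patch_features)))
           for d in range(dim)]
    return bag, weights
```

A bag-level classifier would then be trained on the pooled embedding using only the slide-level label, which is why no region annotations are needed.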