Deep Fully Convolutional Networks for Mitosis Detection

Mohammed B. Abdulkareem, Md Samiul Islam, A. Aljoubory, Zhou Nuoya
DOI: 10.1145/3351180.3351213
Published in: Proceedings of the 2019 4th International Conference on Robotics, Control and Automation, 2019-07-26
Citations: 1

Abstract

Image recognition plays a vital role in medical image analysis, where performance depends on the choice of algorithm, input data, features, parameters, and type of learning. Three crucial morphological features visible on Hematoxylin and Eosin (H&E) stained slides are used to classify breast cancer: mitosis count, tubule formation, and nuclear pleomorphism. The mitosis count is an essential and important diagnostic factor for breast cancer grading. Mitosis detection remains a challenging problem because mitotic cells are part of a cell cycle that generates a new nucleus and pass through different stages of mitosis. We implemented a residual learning algorithm to ease optimization and training; our model is a pre-trained ResNet18 that classifies and localizes mitoses, built on the TensorFlow framework (TF-DFCNN). To avoid the degradation problem and achieve high detection accuracy, the model combines a normalization function, data augmentation, and a sampling method. Our deep fully convolutional network (DFCNN) consists of two stages: the first stage classifies the MITOS-ATYPIA 2014 dataset and achieves 85% accuracy. In the second stage, we add a new layer that detects localization following the weakly-supervised object localization concept, using the class activation map (CAM) technique to identify discriminative regions and retrain the CNN model without a fully connected layer; combining the framework with this localization layer makes the model more complex and precise, reaching about 93% accuracy.
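The CAM technique named in the abstract computes, for a chosen class, a weighted sum of the final convolutional feature maps, where the weights come from the global-average-pooling classifier for that class; high-valued regions of the resulting map are the discriminative regions used for localization. A minimal NumPy sketch of that computation, with hypothetical array shapes (the paper's actual ResNet18/TensorFlow pipeline is not reproduced here):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map (CAM).

    feature_maps:  (C, H, W) activations from the last convolutional layer.
    class_weights: (C,) classifier weights (after global average pooling)
                   for the target class.
    Returns an (H, W) map normalized to [0, 1]; high values mark
    discriminative regions for that class.
    """
    # Weighted sum over the channel axis: CAM(x, y) = sum_k w_k * F_k(x, y)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    # Normalize for visualization / thresholding.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Hypothetical example: 4 feature channels of 8x8 activations.
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
weights = np.array([0.5, -0.2, 0.9, 0.1])
cam = class_activation_map(fmaps, weights)
print(cam.shape)  # (8, 8)
```

In practice the map is upsampled to the input image size and thresholded to obtain the localization, which is how a classification network can be reused for detection without a fully connected layer.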