Neural network interpretation techniques for analysis of histological images of breast abnormalities

A. Fomina, A. Borbat, E. Karpulevich, Anton Yu. Naumov

Gynecology (Q3, Medicine). DOI: 10.26442/20795696.2022.6.201990. Published 2023-01-20.

Abstract

Background. Neural networks are actively used in digital pathology to analyze histological images and support medical decision-making. A common approach is to solve a classification problem, in which class labels are the only model output. However, it is also important to understand which areas of an image have the greatest impact on the model's response. Machine learning interpretation techniques help solve this problem.

Aim. To study the consistency of different neural network interpretation methods when classifying histological images of the breast and to obtain an expert assessment of the results of the evaluated methods.

Materials and methods. We performed a preliminary analysis and pre-processing of an existing data set used to train pre-selected neural network models. Existing methods for visualizing the attention areas of trained models were first applied to easy-to-interpret data, followed by verification of their correct use. The same neural network models were then trained on histological data, the selected interpretation methods were applied to the histological images, the consistency of the results was evaluated, and an expert assessment of the results was obtained.

Results. Several machine learning interpretation methods were studied using two different neural network architectures and a set of histological images of breast abnormalities. On the test sample, the ResNet18 and ViT-B-16 models trained on the histological image set achieved an accuracy of 0.89 and 0.89 and a ROC AUC of 0.99 and 0.96, respectively. The results were also evaluated by an expert using the Label Studio tool. For each pair of images, the expert was asked to select the most appropriate answer ("Yes" or "No") to the statement: "The highlighted areas generally correspond to the Malignant class." The "Yes" response rate was 0.56 for the ResNet_Malignant category and 1.0 for ViT_Malignant.

Conclusion. Interpretability experiments were conducted with two different architectures: the ResNet18 convolutional network and the attention-based ViT-B-16 network. The outputs of the trained models were visualized using the Grad-CAM and Attention Rollout methods, respectively. Experiments were first run on an easy-to-interpret dataset to ensure the methods were applied correctly; the methods were then applied to the set of histological images. On the easy-to-interpret images (cat images), the convolutional network agreed more closely with human perception; in contrast, on histological images of breast cancer, ViT-B-16 produced results much closer to the expert's perception.
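To illustrate the kind of visualization described above, the following is a minimal Grad-CAM sketch for a ResNet18 binary (Benign/Malignant) classifier in PyTorch. The checkpoint path, the two-class head, the label order, and the choice of `layer4` as the target layer are illustrative assumptions, not the authors' released code.

```python
# Minimal Grad-CAM sketch for a two-class ResNet18 histology classifier (assumed setup).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image


def grad_cam(model, x, target_layer, class_idx=None):
    """Return an [H, W] heatmap in [0, 1] for a single-image batch x."""
    activations, gradients = [], []

    def fwd_hook(_, __, output):
        activations.append(output)                     # feature maps [1, C, h, w]

    def bwd_hook(_, grad_in, grad_out):
        gradients.append(grad_out[0])                  # gradients w.r.t. feature maps

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()                # backprop the chosen class score
    finally:
        h1.remove()
        h2.remove()

    acts, grads = activations[0], gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)     # channel importance (global avg of grads)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0].detach().cpu().numpy()


if __name__ == "__main__":
    # Assumed label order: 0 = Benign, 1 = Malignant.
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    # model.load_state_dict(torch.load("resnet18_breast.pt"))  # hypothetical checkpoint
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img = preprocess(Image.open("patch.png").convert("RGB")).unsqueeze(0)

    # layer4 is the last convolutional block of ResNet18, a common Grad-CAM target.
    heatmap = grad_cam(model, img, target_layer=model.layer4, class_idx=1)
    print(heatmap.shape)  # (224, 224)
```

The resulting heatmap can be overlaid on the original patch to highlight the regions that most influenced the "Malignant" prediction. For ViT-B-16, the analogous role is played by Attention Rollout, which averages the attention matrices over heads and multiplies them across transformer layers to propagate attention from the classification token back to the input patches.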
Source journal: Gynecology (Medicine: Obstetrics and Gynecology). CiteScore 0.70; self-citation rate 0.00%; 52 articles per year; review time 8 weeks.