Self-Attentive Adversarial Stain Normalization.

Aman Shrivastava, William Adorno, Yash Sharma, Lubaina Ehsan, S Asad Ali, Sean R Moore, Beatrice Amadi, Paul Kelly, Sana Syed, Donald E Brown
{"title":"Self-Attentive Adversarial Stain Normalization.","authors":"Aman Shrivastava,&nbsp;William Adorno,&nbsp;Yash Sharma,&nbsp;Lubaina Ehsan,&nbsp;S Asad Ali,&nbsp;Sean R Moore,&nbsp;Beatrice Amadi,&nbsp;Paul Kelly,&nbsp;Sana Syed,&nbsp;Donald E Brown","doi":"10.1007/978-3-030-68763-2_10","DOIUrl":null,"url":null,"abstract":"<p><p>Hematoxylin and Eosin (H&E) stained Whole Slide Images (WSIs) are utilized for biopsy visualization-based diagnostic and prognostic assessment of diseases. Variation in the H&E staining process across different lab sites can lead to significant variations in biopsy image appearance. These variations introduce an undesirable bias when the slides are examined by pathologists or used for training deep learning models. Traditionally proposed stain normalization and color augmentation strategies can handle the human level bias. But deep learning models can easily disentangle the linear transformation used in these approaches, resulting in undesirable bias and lack of generalization. To handle these limitations, we propose a Self-Attentive Adversarial Stain Normalization (SAASN) approach for the normalization of multiple stain appearances to a common domain. This unsupervised generative adversarial approach includes self-attention mechanism for synthesizing images with finer detail while preserving the structural consistency of the biopsy features during translation. SAASN demonstrates consistent and superior performance compared to other popular stain normalization techniques on H&E stained duodenal biopsy image data.</p>","PeriodicalId":93349,"journal":{"name":"Pattern Recognition : ICPR International Workshops and Challenges, virtual event, January 10-15, 2021, proceedings. Part I. International Conference on Pattern Recognition (25th : 2021 : Online)","volume":"12661 ","pages":"120-140"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8528268/pdf/nihms-1696243.pdf","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition : ICPR International Workshops and Challenges, virtual event, January 10-15, 2021, proceedings. Part I. International Conference on Pattern Recognition (25th : 2021 : Online)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-68763-2_10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/2/21 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 18

Abstract

Hematoxylin and Eosin (H&E) stained Whole Slide Images (WSIs) are utilized for biopsy visualization-based diagnostic and prognostic assessment of diseases. Variation in the H&E staining process across different lab sites can lead to significant variations in biopsy image appearance. These variations introduce an undesirable bias when the slides are examined by pathologists or used for training deep learning models. Traditionally proposed stain normalization and color augmentation strategies can handle human-level bias, but deep learning models can easily disentangle the linear transformations used in these approaches, resulting in undesirable bias and a lack of generalization. To address these limitations, we propose a Self-Attentive Adversarial Stain Normalization (SAASN) approach for normalizing multiple stain appearances to a common domain. This unsupervised generative adversarial approach includes a self-attention mechanism for synthesizing images with finer detail while preserving the structural consistency of the biopsy features during translation. SAASN demonstrates consistent and superior performance compared to other popular stain normalization techniques on H&E stained duodenal biopsy image data.
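The central architectural ingredient described above, a self-attention layer inside an image-to-image generator, can be illustrated with a short sketch. The block below is a minimal SAGAN-style self-attention layer in PyTorch, assuming a standard query/key/value formulation with a learnable residual gate; the class name `SelfAttention2d`, the channel-reduction factor, and the tensor shapes are illustrative assumptions, not the authors' exact SAASN implementation.

```python
# Minimal sketch of a SAGAN-style self-attention block of the kind used in
# image-to-image generators. Names and hyperparameters are illustrative
# assumptions, not the SAASN authors' exact implementation.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a feature map (B, C, H, W)."""

    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions project the feature map to query, key, and value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable gate; initialized to 0 so the block starts as an identity mapping.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, N, C//r), N = H*W
        k = self.key(x).flatten(2)                     # (B, C//r, N)
        v = self.value(x).flatten(2)                   # (B, C, N)
        attn = torch.softmax(q @ k, dim=-1)            # (B, N, N) attention map
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # gated residual connection


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)                  # dummy generator feature map
    print(SelfAttention2d(64)(feat).shape)             # torch.Size([2, 64, 32, 32])
```

Starting the gate at zero lets the generator first learn purely local convolutional features and only gradually mix in long-range dependencies, which is the usual motivation for adding self-attention to adversarial image translation models.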
