Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy

Q2 Medicine
Maryam Berijanian, Nadine S. Schaadt, Boqiang Huang, Johannes Lotz, Friedrich Feuerhake, Dorit Merhof
{"title":"Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy","authors":"Maryam Berijanian ,&nbsp;Nadine S. Schaadt ,&nbsp;Boqiang Huang ,&nbsp;Johannes Lotz ,&nbsp;Friedrich Feuerhake ,&nbsp;Dorit Merhof","doi":"10.1016/j.jpi.2023.100195","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><p>Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks since manual image annotation is an expensive and laborious process. This situation deteriorates even more in the case of a large variability of images. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.</p></div><div><h3>Methods</h3><p>StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissues. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to have an edge-preserving translation. Additionally, a subjective test is conducted on medical and technical experts in the field of digital pathology to evaluate the quality of generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of image augmentation using the synthetized images on classification accuracy.</p></div><div><h3>Results</h3><p>The results show that adding an edge detector helps to improve the quality of translated images and to preserve the general structure of tissues. Quality control and subjective tests on our medical and technical experts show that the real and artificial images cannot be distinguished, thereby confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifier with ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.</p></div><div><h3>Conclusions</h3><p>This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. 
The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a5/6e/main.PMC9947329.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pathology Informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2153353923000093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Medicine","Score":null,"Total":0}
Citations: 0

Abstract

Background

Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses a particular challenge for supervised tasks, since manual image annotation is an expensive and laborious process. The situation worsens further when the images are highly variable. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has recently gained much attention, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.
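
To make the scaling issue concrete (an illustrative note, not content from the paper): translating among N stains with pairwise GANs such as CycleGAN requires one model per pair of domains, whereas a single conditional many-to-many generator covers all pairs. A minimal Python sketch of this count:

```python
# Illustrative model-count comparison (not from the paper).
def pairwise_model_count(n_stains: int) -> int:
    """One CycleGAN-style model per unordered pair of stain domains."""
    return n_stains * (n_stains - 1) // 2

def many_to_many_model_count(n_stains: int) -> int:
    """A single conditional generator (StarGAN-v2 style) handles every pair."""
    return 1

print(pairwise_model_count(4))      # 6 separate models for 4 stains
print(many_to_many_model_count(4))  # 1 model, regardless of the number of stains
```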

Methods

StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissue. An edge detector is incorporated to encourage the network to maintain the shape and structure of the tissues, yielding an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmenting the training data with the synthesized images on classification accuracy.
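
A minimal sketch of how such an edge-preservation term could be attached to a StarGAN-v2-style generator objective, assuming a fixed Sobel filter as the edge detector; the function and weight names (sobel_edges, lambda_edge) are our own illustration, not the authors' code:

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge magnitude via fixed Sobel filters; img is (B, C, H, W) in [0, 1]."""
    gray = img.mean(dim=1, keepdim=True)  # collapse stain channels to intensity
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_preservation_loss(x_src: torch.Tensor, x_fake: torch.Tensor) -> torch.Tensor:
    """L1 distance between edge maps of the source and the translated image."""
    return F.l1_loss(sobel_edges(x_src), sobel_edges(x_fake))

# Hypothetical use inside a StarGAN-v2-style generator update:
#   x_fake = generator(x_src, style_code_of_target_stain)
#   g_loss = adv_loss + lambda_sty * style_loss + lambda_cyc * cycle_loss \
#            + lambda_edge * edge_preservation_loss(x_src, x_fake)
```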

Results

The results show that adding an edge detector helps to improve the quality of the translated images and to preserve the general structure of the tissues. Quality control and subjective tests with our medical and technical experts show that real and artificial images cannot be distinguished, confirming that the synthetic images are technically plausible. Moreover, this research shows that augmenting the training dataset with the outputs of the proposed stain translation method improves the accuracy of breast cancer classifiers based on ResNet-50 and VGG-16 by 8.0% and 9.3%, respectively.
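
A rough sketch of the augmentation step described here, under assumed details (hypothetical folder paths, a binary cancer vs. non-cancer head, PyTorch/torchvision), not the authors' training code: synthetic stain-translated tiles are pooled with the real labeled tiles before fine-tuning an ImageNet-pretrained ResNet-50.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: class-labelled tiles; "tiles/synthetic" would hold
# stain-translated images produced by the many-to-many GAN (paths are hypothetical).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real_ds = datasets.ImageFolder("tiles/real", transform=tfm)
synth_ds = datasets.ImageFolder("tiles/synthetic", transform=tfm)
loader = DataLoader(ConcatDataset([real_ds, synth_ds]), batch_size=32, shuffle=True)

# ImageNet-pretrained ResNet-50 with a new binary classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```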

Conclusions

This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.

Source Journal

Journal of Pathology Informatics
Medicine - Pathology and Forensic Medicine
CiteScore: 3.70
Self-citation rate: 0.00%
Number of articles: 2
Review time: 18 weeks
Journal description: The Journal of Pathology Informatics (JPI) is an open access peer-reviewed journal dedicated to the advancement of pathology informatics. This is the official journal of the Association for Pathology Informatics (API). The journal aims to publish broadly about pathology informatics and freely disseminate all articles worldwide. This journal is of interest to pathologists, informaticians, academics, researchers, health IT specialists, information officers, IT staff, vendors, and anyone with an interest in informatics. We encourage submissions from anyone with an interest in the field of pathology informatics. We publish all types of papers related to pathology informatics including original research articles, technical notes, reviews, viewpoints, commentaries, editorials, symposia, meeting abstracts, book reviews, and correspondence to the editors. All submissions are subject to rigorous peer review by the well-regarded editorial board and by expert referees in appropriate specialties.