Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens

Light: Science & Applications · Impact Factor 20.6 · JCR Q1 (Optics)
Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, Chulhong Kim
{"title":"Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens","authors":"Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, Chulhong Kim","doi":"10.1038/s41377-024-01554-7","DOIUrl":null,"url":null,"abstract":"<p>In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization are still problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&amp;E (VHE) staining, (2) an U-net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, the E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&amp;E ones. In segmentation, various features (e.g., the cell area, number of cells, and the distance between cell nuclei) have been successfully segmented in VHE images. Finally, by using deep feature vectors from PAH, VHE, and segmented images, StepFF has achieved a 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF’s classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.</p>","PeriodicalId":18069,"journal":{"name":"Light-Science & Applications","volume":null,"pages":null},"PeriodicalIF":20.6000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Light-Science & Applications","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.1038/s41377-024-01554-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPTICS","Score":null,"Total":0}
引用次数: 0

Abstract

In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphology of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E images. In segmentation, various features (e.g., cell area, cell count, and internuclear distance) are successfully segmented in VHE images. Finally, using deep feature vectors from PAH, VHE, and segmented images, StepFF achieves 98.00% classification accuracy, compared with the 94.80% accuracy of conventional PAH classification. Notably, StepFF's classification reached a sensitivity of 100% based on the evaluation of three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
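To make the three-stage pipeline concrete, below is a minimal PyTorch sketch of the inference flow described in the abstract. It is an illustration, not the authors' implementation: `TinyGenerator`, `TinyUNet`, and `StepFusionClassifier` are simplified, hypothetical stand-ins for the E-CUT generator, the segmentation U-Net, and StepFF, and every layer size, channel count, and class count here is an assumption.

```python
# Hypothetical sketch of the three-stage pipeline: (1) virtual staining,
# (2) segmentation, (3) stepwise feature-fusion classification.
# All architectures are simplified stand-ins, not the published models.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the E-CUT generator: grayscale PAH -> 3-channel VHE."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class TinyUNet(nn.Module):
    """Stand-in for the segmentation U-Net: VHE -> nucleus probability map."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return self.up(self.down(x))

class StepFusionClassifier(nn.Module):
    """Stand-in for StepFF: encode each modality (PAH, VHE, segmentation)
    into its own deep feature vector, concatenate, then classify."""
    def __init__(self, channels=(1, 3, 1), feat=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, feat, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for c in channels)
        self.head = nn.Linear(feat * len(channels), n_classes)

    def forward(self, pah, vhe, seg):
        feats = [enc(x) for enc, x in zip(self.encoders, (pah, vhe, seg))]
        return self.head(torch.cat(feats, dim=1))  # fused feature vector

# Inference walk-through on a dummy 256x256 PAH patch.
pah = torch.randn(1, 1, 256, 256)
vhe = TinyGenerator()(pah)                       # (1) virtual H&E staining
seg = TinyUNet()(vhe)                            # (2) nucleus segmentation
logits = StepFusionClassifier()(pah, vhe, seg)   # (3) fused classification
print(logits.shape)  # torch.Size([1, 2])
```

The design point mirrored here is the stepwise fusion: the PAH, VHE, and segmentation images are each encoded into their own feature vector before concatenation, so the classifier sees complementary evidence from all three representations rather than a single image, which is the intuition behind StepFF's reported accuracy gain over classification on PAH alone.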

