Digital Volumetric Biopsy Cores Improve Gleason Grading of Prostate Cancer Using Deep Learning.

ArXiv Pub Date: 2024-09-12
Ekaterina Redekop, Mara Pleasure, Zichen Wang, Anthony Sisk, Yang Zong, Kimberly Flores, William Speier, Corey W Arnold
{"title":"Digital Volumetric Biopsy Cores Improve Gleason Grading of Prostate Cancer Using Deep Learning.","authors":"Ekaterina Redekop, Mara Pleasure, Zichen Wang, Anthony Sisk, Yang Zong, Kimberly Flores, William Speier, Corey W Arnold","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>Prostate cancer (PCa) was the most frequently diagnosed cancer among American men in 2023 [1]. The histological grading of biopsies is essential for diagnosis, and various deep learning-based solutions have been developed to assist with this task. Existing deep learning frameworks are typically applied to individual 2D cross-sections sliced from 3D biopsy tissue specimens. This process impedes the analysis of complex tissue structures such as glands, which can vary depending on the tissue slice examined. We propose a novel digital pathology data source called a \"volumetric core,\" obtained via the extraction and co-alignment of serially sectioned tissue sections using a novel morphology-preserving alignment framework. We trained an attention-based multiple-instance learning (ABMIL) framework on deep features extracted from volumetric patches to automatically classify the Gleason Grade Group (GGG). To handle volumetric patches, we used a modified video transformer with a deep feature extractor pretrained using self-supervised learning. We ran our morphology preserving alignment framework to construct 10,210 volumetric cores, leaving out 30% for pretraining. The rest of the dataset was used to train ABMIL, which resulted in a 0.958 macro-average AUC, 0.671 F1 score, 0.661 precision, and 0.695 recall averaged across all five GGG significantly outperforming the 2D baselines.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419188/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Prostate cancer (PCa) was the most frequently diagnosed cancer among American men in 2023 [1]. The histological grading of biopsies is essential for diagnosis, and various deep learning-based solutions have been developed to assist with this task. Existing deep learning frameworks are typically applied to individual 2D cross-sections sliced from 3D biopsy tissue specimens. This process impedes the analysis of complex tissue structures such as glands, which can vary depending on the tissue slice examined. We propose a novel digital pathology data source called a "volumetric core," obtained via the extraction and co-alignment of serially sectioned tissue using a novel morphology-preserving alignment framework. We trained an attention-based multiple-instance learning (ABMIL) framework on deep features extracted from volumetric patches to automatically classify the Gleason Grade Group (GGG). To handle volumetric patches, we used a modified video transformer with a deep feature extractor pretrained using self-supervised learning. We ran our morphology-preserving alignment framework to construct 10,210 volumetric cores, setting aside 30% for pretraining. The rest of the dataset was used to train ABMIL, which resulted in a 0.958 macro-average AUC, 0.671 F1 score, 0.661 precision, and 0.695 recall averaged across all five GGGs, significantly outperforming the 2D baselines.
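The core-level classifier described above follows the standard attention-based MIL recipe: each volumetric core is represented as a bag of patch embeddings, an attention module scores each patch, and the attention-weighted average embedding is fed to a GGG classification head. Below is a minimal PyTorch sketch of that pooling step, assuming the pretrained video-transformer extractor has already produced one embedding per volumetric patch; the feature dimension, hidden size, and use of the plain (non-gated) attention variant are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of attention-based MIL (ABMIL) pooling over a bag of
# volumetric-patch embeddings. Dimensions and the 5-class GGG head are
# illustrative assumptions.
import torch
import torch.nn as nn


class ABMILClassifier(nn.Module):
    def __init__(self, feat_dim=768, hidden_dim=256, n_classes=5):
        super().__init__()
        # Plain (non-gated) attention scorer; a gated variant is also common.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):
        # bag: (n_patches, feat_dim) embeddings for one volumetric core
        scores = self.attention(bag)                  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)        # attention over patches
        core_embedding = (weights * bag).sum(dim=0)   # (feat_dim,)
        return self.classifier(core_embedding), weights


# Example: a core represented by 120 patch embeddings of dimension 768.
model = ABMILClassifier()
logits, attn = model(torch.randn(120, 768))
```

The attention weights returned alongside the logits also give a per-patch importance map, which is how ABMIL-style models are typically inspected for which regions of the core drove the predicted grade group.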
