GRAPHITE: Graph-based interpretable tissue examination for enhanced explainability in breast cancer histopathology

Impact factor: 6.3 · CAS Zone 2 (Medicine) · JCR Q1 (Biology)
Raktim Kumar Mondol, Ewan K.A. Millar, Peter H. Graham, Lois Browne, Arcot Sowmya, Erik Meijering
DOI: 10.1016/j.compbiomed.2025.111106
Journal: Computers in Biology and Medicine, Volume 197, Article 111106
Published: 2025-09-25
Full text: https://www.sciencedirect.com/science/article/pii/S0010482525014581

Abstract

Explainable AI (XAI) in medical histopathology is essential for enhancing the interpretability and clinical trustworthiness of deep learning models in cancer diagnosis. However, the black-box nature of these models often limits their clinical adoption. We introduce GRAPHITE (Graph-based Interpretable Tissue Examination), a post-hoc explainable framework designed for breast cancer tissue microarray (TMA) analysis. GRAPHITE employs a multiscale approach, extracting patches at various magnification levels, constructing a hierarchical graph, and utilising graph attention networks (GAT) with scalewise attention (SAN) to capture scale-dependent features. We trained the model on 140 tumour TMA cores and four benign whole slide images from which 140 benign samples were created, and tested it on 53 pathologist-annotated TMA samples. GRAPHITE outperformed traditional XAI methods, achieving a mean average precision (mAP) of 0.56, an area under the receiver operating characteristic curve (AUROC) of 0.94, and a threshold robustness (ThR) of 0.70, indicating that the model maintains high performance across a wide range of thresholds. In clinical utility, GRAPHITE achieved the highest area under the decision curve (AUDC) of 4.17e+5, indicating reliable decision support across thresholds. These results highlight GRAPHITE's potential as a clinically valuable tool in computational pathology, providing interpretable visualisations that align with the pathologists' diagnostic reasoning and support precision medicine.
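The core mechanism described above, graph attention over patch features followed by scale-wise attention pooling across magnification levels, can be sketched in a minimal, self-contained form. This is an illustrative reconstruction only: the toy graphs, dimensions, and all weights below are synthetic assumptions, not the authors' GRAPHITE implementation.

```python
# Minimal sketch: one graph-attention layer per magnification scale, then
# scale-wise attention pooling. Illustrative only; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """One graph-attention layer in the standard GAT style.
    H: (n, d) node features; A: (n, n) adjacency with self-loops;
    W: (d, d') projection; a: (2*d',) attention vector."""
    Z = H @ W
    n = Z.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            v = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = np.maximum(0.2 * v, v)   # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)          # mask non-edges
    alpha = softmax(logits, axis=1)                 # normalise over neighbours
    return np.tanh(alpha @ Z)                       # weighted aggregation

# Toy hierarchical setting: patch embeddings at three magnification levels.
d = 8
scale_embeddings = []
for n_nodes in (4, 8, 16):                          # coarse -> fine scales
    H = rng.standard_normal((n_nodes, d))
    A = np.ones((n_nodes, n_nodes))                 # fully connected toy graph
    W = rng.standard_normal((d, d)) * 0.1
    a = rng.standard_normal(2 * d) * 0.1
    scale_embeddings.append(gat_layer(H, A, W, a).mean(axis=0))

S = np.stack(scale_embeddings)                      # (3, d), one row per scale
w = rng.standard_normal(d) * 0.1                    # scale-attention weights
beta = softmax(S @ w)                               # (3,) weights, sum to 1
graph_embedding = beta @ S                          # (d,) final representation
print(graph_embedding.shape)
```

The scale weights `beta` are what makes the pooling interpretable: they quantify how much each magnification level contributes to the final representation, which is the intuition behind scale-dependent attention maps.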

[Graphical abstract]
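The clinical-utility metric in the abstract builds on decision curve analysis. The net-benefit formula below is the standard one (benefit of true positives minus the threshold-weighted cost of false positives); treating AUDC as the trapezoidal area under that curve across thresholds is our assumption about the abstract's metric, and the scores and labels are synthetic.

```python
# Hedged sketch of decision curve analysis and an area-under-decision-curve
# summary. Net benefit follows the standard definition; the AUDC reading
# and all data here are illustrative assumptions.
import numpy as np

def net_benefit(y_true, y_score, pt):
    """Net benefit of calling positives at probability threshold pt."""
    pred = y_score >= pt
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - (fp / n) * (pt / (1 - pt))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)                       # synthetic labels
score = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)

ts = np.linspace(0.05, 0.95, 19)                       # threshold grid
nb = np.array([net_benefit(y, score, t) for t in ts])
audc = np.sum((nb[1:] + nb[:-1]) / 2 * np.diff(ts))    # trapezoidal area
print(float(audc))
```

A model whose net benefit stays high across a wide threshold range (as the abstract reports for GRAPHITE) supports decisions regardless of how aggressively a clinician sets the operating point.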

Source journal: Computers in Biology and Medicine (Engineering, Technology: Biomedical Engineering)
CiteScore: 11.70
Self-citation rate: 10.40%
Articles per year: 1086
Review time: 74 days
Journal description: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.