{"title":"用于计算病理学的可解释人工智能可识别模型局限性和组织生物标志物","authors":"Jakub R. Kaczmarzyk, Joel H. Saltz, Peter K. Koo","doi":"arxiv-2409.03080","DOIUrl":null,"url":null,"abstract":"Deep learning models have shown promise in histopathology image analysis, but\ntheir opaque decision-making process poses challenges in high-risk medical\nscenarios. Here we introduce HIPPO, an explainable AI method that interrogates\nattention-based multiple instance learning (ABMIL) models in computational\npathology by generating counterfactual examples through tissue patch\nmodifications in whole slide images. Applying HIPPO to ABMIL models trained to\ndetect breast cancer metastasis reveals that they may overlook small tumors and\ncan be misled by non-tumor tissue, while attention maps$\\unicode{x2014}$widely\nused for interpretation$\\unicode{x2014}$often highlight regions that do not\ndirectly influence predictions. By interpreting ABMIL models trained on a\nprognostic prediction task, HIPPO identified tissue areas with stronger\nprognostic effects than high-attention regions, which sometimes showed\ncounterintuitive influences on risk scores. These findings demonstrate HIPPO's\ncapacity for comprehensive model evaluation, bias detection, and quantitative\nhypothesis testing. HIPPO greatly expands the capabilities of explainable AI\ntools to assess the trustworthy and reliable development, deployment, and\nregulation of weakly-supervised models in computational pathology.","PeriodicalId":501572,"journal":{"name":"arXiv - QuanBio - Tissues and Organs","volume":"40 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable AI for computational pathology identifies model limitations and tissue biomarkers\",\"authors\":\"Jakub R. Kaczmarzyk, Joel H. Saltz, Peter K. 
Koo\",\"doi\":\"arxiv-2409.03080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning models have shown promise in histopathology image analysis, but\\ntheir opaque decision-making process poses challenges in high-risk medical\\nscenarios. Here we introduce HIPPO, an explainable AI method that interrogates\\nattention-based multiple instance learning (ABMIL) models in computational\\npathology by generating counterfactual examples through tissue patch\\nmodifications in whole slide images. Applying HIPPO to ABMIL models trained to\\ndetect breast cancer metastasis reveals that they may overlook small tumors and\\ncan be misled by non-tumor tissue, while attention maps$\\\\unicode{x2014}$widely\\nused for interpretation$\\\\unicode{x2014}$often highlight regions that do not\\ndirectly influence predictions. By interpreting ABMIL models trained on a\\nprognostic prediction task, HIPPO identified tissue areas with stronger\\nprognostic effects than high-attention regions, which sometimes showed\\ncounterintuitive influences on risk scores. These findings demonstrate HIPPO's\\ncapacity for comprehensive model evaluation, bias detection, and quantitative\\nhypothesis testing. 
HIPPO greatly expands the capabilities of explainable AI\\ntools to assess the trustworthy and reliable development, deployment, and\\nregulation of weakly-supervised models in computational pathology.\",\"PeriodicalId\":501572,\"journal\":{\"name\":\"arXiv - QuanBio - Tissues and Organs\",\"volume\":\"40 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuanBio - Tissues and Organs\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.03080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Tissues and Organs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explainable AI for computational pathology identifies model limitations and tissue biomarkers
Deep learning models have shown promise in histopathology image analysis, but
their opaque decision-making process poses challenges in high-risk medical
scenarios. Here we introduce HIPPO, an explainable AI method that interrogates
attention-based multiple instance learning (ABMIL) models in computational
pathology by generating counterfactual examples through tissue patch
modifications in whole slide images. Applying HIPPO to ABMIL models trained to
detect breast cancer metastasis reveals that they may overlook small tumors and
can be misled by non-tumor tissue, while attention maps, widely
used for interpretation, often highlight regions that do not
directly influence predictions. By interpreting ABMIL models trained on a
prognostic prediction task, HIPPO identified tissue areas with stronger
prognostic effects than the high-attention regions, which themselves sometimes
showed counterintuitive influences on risk scores. These findings demonstrate HIPPO's
capacity for comprehensive model evaluation, bias detection, and quantitative
hypothesis testing. HIPPO greatly expands the explainable AI toolkit for
computational pathology, supporting the trustworthy and reliable development,
deployment, and regulation of weakly-supervised models.
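The counterfactual strategy the abstract describes (modify tissue patches in a slide, then measure how the model's prediction changes) can be sketched in general terms. Below is a minimal, hypothetical illustration of a patch-removal counterfactual for an attention-based MIL model; the `ABMIL` class and `necessity_effect` helper are simplified stand-ins with random weights, not HIPPO's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ABMIL:
    """Minimal attention-based MIL head: patch embeddings -> slide-level score.
    Weights are random stand-ins for a trained model."""
    def __init__(self, dim=256, hidden=128):
        self.V = rng.normal(scale=0.1, size=(dim, hidden))  # attention projection
        self.w = rng.normal(scale=0.1, size=hidden)         # attention scorer
        self.c = rng.normal(scale=0.1, size=dim)            # classifier weights

    def __call__(self, patches):
        # patches: (n_patches, dim) bag of patch embeddings for one slide
        scores = np.tanh(patches @ self.V) @ self.w         # one score per patch
        a = np.exp(scores - scores.max())
        a /= a.sum()                                        # softmax attention
        slide = a @ patches                                 # attention-weighted pooling
        return 1.0 / (1.0 + np.exp(-(slide @ self.c)))      # sigmoid probability

def necessity_effect(model, patches, region):
    """Counterfactual 'necessity' test: how much does the slide-level
    prediction change when the patches indexed by `region` are removed?"""
    keep = np.ones(len(patches), dtype=bool)
    keep[region] = False
    return model(patches) - model(patches[keep])

model = ABMIL()
bag = rng.normal(size=(100, 256))           # embeddings for 100 tissue patches
effect = necessity_effect(model, bag, region=[0, 1, 2])
print(f"prediction change after removing region: {effect:+.4f}")
```

A large positive effect would suggest the removed region is necessary for the positive prediction; a near-zero effect would suggest the model's output does not depend on it, even if an attention map highlights it. The complementary "sufficiency" test (keeping only the region and discarding the rest of the bag) follows the same pattern.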