{"title":"基于定量的深度学习决策的可解释人工智能:肝细胞癌鉴别定量形态学特征的聚类和可视化。","authors":"Gen Takagi, Saori Takeyama, Tokiya Abe, Akinori Hashiguchi, Michiie Sakamoto, Kenji Suzuki, Masahiro Yamaguchi","doi":"10.1117/1.JMI.12.6.061407","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Deep learning (DL) is rapidly advancing in computational pathology, offering high diagnostic accuracy but often functioning as a \"black box\" with limited interpretability. This lack of transparency hinders its clinical adoption, emphasizing the need for quantitative explainable artificial intelligence (QXAI) methods. We propose a QXAI approach to objectively and quantitatively elucidate the reasoning behind DL model decisions in hepatocellular carcinoma (HCC) pathological image analysis.</p><p><strong>Approach: </strong>The proposed method utilizes clustering in the latent space of embeddings generated by a DL model to identify regions that contribute to the model's discrimination. Each cluster is then quantitatively characterized by morphometric features obtained through nuclear segmentation using HoverNet and key feature selection with LightGBM. Statistical analysis is performed to assess the importance of selected features, ensuring an interpretable relationship between morphological characteristics and classification outcomes. This approach enables the quantitative interpretation of which regions and features are critical for the model's decision-making, without sacrificing accuracy.</p><p><strong>Results: </strong>Experiments on pathology images of hematoxylin-and-eosin-stained HCC tissue sections showed that the proposed method effectively identified key discriminatory regions and features, such as nuclear size, chromatin density, and shape irregularity. The clustering-based analysis provided structured insights into morphological patterns influencing classification, with explanations evaluated as clinically relevant and interpretable by a pathologist.</p><p><strong>Conclusions: </strong>Our QXAI framework enhances the interpretability of DL-based pathology analysis by linking morphological features to classification decisions. This fosters trust in DL models and facilitates their clinical integration.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061407"},"PeriodicalIF":1.7000,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12513858/pdf/","citationCount":"0","resultStr":"{\"title\":\"Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination.\",\"authors\":\"Gen Takagi, Saori Takeyama, Tokiya Abe, Akinori Hashiguchi, Michiie Sakamoto, Kenji Suzuki, Masahiro Yamaguchi\",\"doi\":\"10.1117/1.JMI.12.6.061407\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Deep learning (DL) is rapidly advancing in computational pathology, offering high diagnostic accuracy but often functioning as a \\\"black box\\\" with limited interpretability. This lack of transparency hinders its clinical adoption, emphasizing the need for quantitative explainable artificial intelligence (QXAI) methods. 
We propose a QXAI approach to objectively and quantitatively elucidate the reasoning behind DL model decisions in hepatocellular carcinoma (HCC) pathological image analysis.</p><p><strong>Approach: </strong>The proposed method utilizes clustering in the latent space of embeddings generated by a DL model to identify regions that contribute to the model's discrimination. Each cluster is then quantitatively characterized by morphometric features obtained through nuclear segmentation using HoverNet and key feature selection with LightGBM. Statistical analysis is performed to assess the importance of selected features, ensuring an interpretable relationship between morphological characteristics and classification outcomes. This approach enables the quantitative interpretation of which regions and features are critical for the model's decision-making, without sacrificing accuracy.</p><p><strong>Results: </strong>Experiments on pathology images of hematoxylin-and-eosin-stained HCC tissue sections showed that the proposed method effectively identified key discriminatory regions and features, such as nuclear size, chromatin density, and shape irregularity. The clustering-based analysis provided structured insights into morphological patterns influencing classification, with explanations evaluated as clinically relevant and interpretable by a pathologist.</p><p><strong>Conclusions: </strong>Our QXAI framework enhances the interpretability of DL-based pathology analysis by linking morphological features to classification decisions. This fosters trust in DL models and facilitates their clinical integration.</p>\",\"PeriodicalId\":47707,\"journal\":{\"name\":\"Journal of Medical Imaging\",\"volume\":\"12 6\",\"pages\":\"061407\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12513858/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1117/1.JMI.12.6.061407\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/10/11 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1117/1.JMI.12.6.061407","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/10/11 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination.
Purpose: Deep learning (DL) is rapidly advancing in computational pathology, offering high diagnostic accuracy but often functioning as a "black box" with limited interpretability. This lack of transparency hinders its clinical adoption, emphasizing the need for quantitative explainable artificial intelligence (QXAI) methods. We propose a QXAI approach to objectively and quantitatively elucidate the reasoning behind DL model decisions in hepatocellular carcinoma (HCC) pathological image analysis.
Approach: The proposed method utilizes clustering in the latent space of embeddings generated by a DL model to identify regions that contribute to the model's discrimination. Each cluster is then quantitatively characterized by morphometric features obtained through nuclear segmentation using HoverNet and key feature selection with LightGBM. Statistical analysis is performed to assess the importance of selected features, ensuring an interpretable relationship between morphological characteristics and classification outcomes. This approach enables the quantitative interpretation of which regions and features are critical for the model's decision-making, without sacrificing accuracy.
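The following Python sketch is a rough illustration of the pipeline described above, not the authors' released code: it clusters patch embeddings from the DL model in latent space and ranks per-region morphometric features by LightGBM importance. The function names, cluster count, and feature columns are hypothetical assumptions, and HoVer-Net-style nuclear segmentation is assumed to have already produced the morphometric feature table.

    # Illustrative sketch only (assumption): cluster DL embeddings, then rank
    # morphometric features by LightGBM importance. Names and defaults are hypothetical.
    import numpy as np
    import lightgbm as lgb
    from sklearn.cluster import KMeans

    def cluster_embeddings(embeddings: np.ndarray, n_clusters: int = 8) -> np.ndarray:
        # Group patch embeddings from the trained DL model in its latent space.
        return KMeans(n_clusters=n_clusters, random_state=0).fit_predict(embeddings)

    def select_key_features(features: np.ndarray, labels: np.ndarray,
                            feature_names: list, top_k: int = 5) -> list:
        # Rank nuclear morphometric features by their importance in a LightGBM
        # classifier trained to separate the classes (or clusters) of interest.
        model = lgb.LGBMClassifier(n_estimators=200, random_state=0)
        model.fit(features, labels)
        order = np.argsort(model.feature_importances_)[::-1]
        return [feature_names[i] for i in order[:top_k]]

In this sketch, the selected feature names (e.g., nuclear area or a chromatin-density proxy) would then be the quantities examined statistically and reported back to the pathologist.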
Results: Experiments on pathology images of hematoxylin-and-eosin-stained HCC tissue sections showed that the proposed method effectively identified key discriminatory regions and features, such as nuclear size, chromatin density, and shape irregularity. The clustering-based analysis provided structured insights into morphological patterns influencing classification, with explanations evaluated as clinically relevant and interpretable by a pathologist.
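As a hedged illustration of how features of the kind reported above (nuclear size, chromatin density, shape irregularity) could be quantified per nucleus, the sketch below uses scikit-image region properties on a labeled nuclear mask; this specific implementation is an assumption and not the study's measurement code.

    # Illustrative sketch only (assumption): per-nucleus morphometrics from a labeled
    # segmentation mask using scikit-image.
    import numpy as np
    from skimage.measure import regionprops

    def nuclear_morphometrics(label_mask: np.ndarray, intensity: np.ndarray) -> list:
        # label_mask: integer image in which each segmented nucleus has a unique label.
        # intensity: grayscale image (e.g., hematoxylin channel) of the same shape.
        rows = []
        for region in regionprops(label_mask, intensity_image=intensity):
            perimeter = region.perimeter if region.perimeter > 0 else 1.0
            rows.append({
                "area": float(region.area),                                 # nuclear size
                "circularity": 4.0 * np.pi * region.area / perimeter ** 2,  # shape (ir)regularity
                "mean_intensity": float(region.mean_intensity),             # chromatin density proxy
            })
        return rows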
Conclusions: Our QXAI framework enhances the interpretability of DL-based pathology analysis by linking morphological features to classification decisions. This fosters trust in DL models and facilitates their clinical integration.
About the journal:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continue to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease, as well as in the understanding of normal conditions. The scope of JMI includes: Imaging physics, Tomographic reconstruction algorithms (such as those in CT and MRI), Image processing and deep learning, Computer-aided diagnosis and quantitative image analysis, Visualization and modeling, Picture archiving and communications systems (PACS), Image perception and observer performance, Technology assessment, Ultrasonic imaging, Image-guided procedures, Digital pathology, Biomedical applications of biomedical imaging. JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.