Mengze Xu. "Bridging Clinical Knowledge and AI Interpretability in Thoracic Radiology." iRadiology 3(4): 311-312, published 2025-06-25. DOI: 10.1002/ird3.70015. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ird3.70015
Bridging Clinical Knowledge and AI Interpretability in Thoracic Radiology
Yuan's study [1], entitled “Anatomic Boundary-Aware Explanation for Convolutional Neural Networks in Diagnostic Radiology,” underscores a fundamental gap in existing explainable artificial intelligence (XAI) approaches: the neglect of clinical domain knowledge. Thoracic diseases primarily manifest within specific anatomical regions, such as the lung parenchyma. Yet conventional XAI methods such as Grad-CAM or Integrated Gradients often highlight extraneous areas (e.g., medical devices, chest wall artifacts), leading to misinterpretations. By leveraging anatomic boundaries derived from a pretrained lung segmentation model, the authors enforce spatial constraints on CNN explanations, aligning them with clinically relevant regions. This innovation is particularly valuable in resource-limited settings, where annotations for fine-grained lesion localization are scarce.
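The core idea of spatially constraining an explanation can be illustrated with a minimal sketch: multiply the attribution heatmap elementwise by a binary anatomical mask and renormalize. This is only an illustration of the concept, not the authors' implementation; the function name and the toy arrays are invented for the example.

```python
import numpy as np

def constrain_explanation(heatmap, lung_mask, eps=1e-8):
    """Zero out attribution outside the anatomical boundary and renormalize.

    heatmap: 2D array of non-negative attribution scores (e.g., from Grad-CAM).
    lung_mask: binary 2D array, 1 inside the lung parenchyma.
    """
    masked = heatmap * lung_mask
    return masked / (masked.sum() + eps)

# Toy 4x4 example: attribution "leaking" onto the chest wall (left column)
heatmap = np.array([
    [0.9, 0.1, 0.2, 0.0],
    [0.8, 0.3, 0.4, 0.1],
    [0.7, 0.2, 0.5, 0.2],
    [0.6, 0.0, 0.1, 0.0],
])
lung_mask = np.array([
    [0, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 1],
])
constrained = constrain_explanation(heatmap, lung_mask)
# All attribution outside the lung mask is now zero
assert constrained[:, 0].sum() == 0
```

In the actual pipeline, the mask would come from the pretrained segmentation model rather than being hand-specified.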
The study's quantitative results are compelling: across 72 scenarios involving three CNN architectures, four diseases, and two classification settings, the boundary-aware method outperformed baseline explanations in 71 cases. For example, in pneumothorax detection, the Dice similarity coefficient (DSC) improved by up to 5.09% when anatomic constraints were integrated. These findings support the hypothesis that incorporating radiological expertise into XAI frameworks enhances explanation fidelity.
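For readers unfamiliar with the overlap metrics used in the evaluation, the DSC and intersection over union (IoU) between a binarized explanation and a ground-truth lesion mask can be computed as follows (a standard definition, not code from the study):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection over union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

# Toy 3x3 masks: explanation covers one extra pixel beyond the lesion
pred = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
dice, iou = dice_and_iou(pred, truth)
# inter = 2, |pred| = 3, |truth| = 2 -> dice = 4/5 = 0.8; union = 3 -> iou = 2/3
```

Note that DSC is always at least as large as IoU for the same pair of masks, which is worth keeping in mind when comparing numbers across papers.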
The paper's strengths lie in its plug-and-play design and transfer learning strategy. By decoupling lung segmentation from the CNN classifier, the authors avoid retraining on annotated target datasets, reducing computational and labeling costs. The use of publicly available segmentation datasets (e.g., Japanese Society of Radiological Technology) ensures reproducibility and scalability. However, this approach assumes minimal domain shift between external and target datasets. Future studies should evaluate robustness across diverse imaging protocols or patient populations, where anatomical variations (e.g., emphysematous lungs, postsurgical changes) might affect segmentation accuracy. Another notable aspect is the comprehensive evaluation of multiple XAI methods (saliency map, Grad-CAM, Integrated Gradients) and CNN architectures (VGG-11, ResNet-18, AlexNet) [2]. The consistent improvements observed across these configurations suggest the boundary-aware framework is generalizable. However, the reliance on lightweight CNNs (e.g., VGG-11) raises questions about applicability to modern, deeper models (e.g., vision transformers), which may require different regularization strategies.
A limitation is the qualitative gap between improved metrics and clinical utility. Although intersection over union (IoU) and DSC metrics quantify overlap with ground-truth lesions, they do not directly measure radiologists' trust in AI explanations. Future work should incorporate human-in-the-loop studies to assess how boundary-aware explanations influence diagnostic decisions and workflow efficiency.
Yuan's approach opens new avenues for integrating domain knowledge into XAI. For instance, extending anatomical constraints to other organs (e.g., heart, mediastinum) could enhance explanations for complex pathologies such as aortic aneurysms. Additionally, combining boundary-aware XAI with weakly supervised learning might improve lesion segmentation accuracy, addressing the study's low DSC values (e.g., < 10% for certain mass explanations).
The paper also highlights the tension between multilabel and binary classification. Although binary classifiers showed superior explanation performance, clinical practice often demands multilabel predictions. Future research could explore hybrid approaches, such as using binary classifiers as building blocks for multilabel tasks, as proposed by Shiraishi and Fukumizu [3].
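The "binary classifiers as building blocks" idea can be sketched as a generic one-vs-rest scheme: one independent binary classifier per disease label, with their predictions stacked into a multilabel output. This is a simplified illustration using a toy nearest-centroid learner and synthetic features, not the specific method of Shiraishi and Fukumizu [3].

```python
import numpy as np

def fit_binary(X, y):
    """Toy nearest-centroid binary classifier: store the two class centroids."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict_binary(model, X):
    c0, c1 = model
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # synthetic image features
Y = (X[:, :3] > 0).astype(int)       # three synthetic disease labels

# One independent binary classifier per label -> multilabel prediction matrix
models = [fit_binary(X, Y[:, k]) for k in range(Y.shape[1])]
multilabel_pred = np.column_stack([predict_binary(m, X) for m in models])
assert multilabel_pred.shape == (200, 3)
```

A practical appeal of this decomposition, echoed by the study's results, is that each binary classifier (and its explanation) can be evaluated and constrained independently.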
Several directions could extend this line of work:
- Advanced segmentation models: incorporate state-of-the-art segmentation tools such as MedSAM [4] to improve boundary precision, especially in challenging cases.
- Dynamic constraints: develop disease-specific boundaries (e.g., differentiating pneumothorax from atelectasis) to refine explanations further.
- Real-world validation: conduct randomized controlled trials to evaluate how boundary-aware explanations affect radiologists' diagnostic accuracy and confidence.
- Generalization to other modalities: adapt the framework to computed tomography or magnetic resonance imaging, where organ segmentation is equally critical.
Yuan's study represents a pivotal step toward bridging clinical knowledge and AI interpretability in thoracic radiology. By constraining CNN explanations to anatomical boundaries, the authors demonstrate that domain-specific regularization can mitigate shortcut learning and align AI reasoning with clinical intuition. Although challenges remain—including validation in diverse populations and integration with advanced models—the proposed framework sets a precedent for knowledge-driven XAI in medical imaging. As AI transitions from research to clinical practice, such innovations will be essential for fostering trust and ensuring safe, effective patient care.