Syed Jawad Hussain Shah , Ahmed Albishri , Rong Wang , Yugyung Lee
Integrating local and global attention mechanisms for enhanced oral cancer detection and explainability
Computers in Biology and Medicine, Volume 189, Article 109841 (published 2025-03-07)
DOI: 10.1016/j.compbiomed.2025.109841
Abstract
Background and Objective:
Early detection of Oral Squamous Cell Carcinoma (OSCC) improves survival rates, but traditional diagnostic methods often produce inconsistent results. This study introduces the Oral Cancer Attention Network (OCANet), a U-Net-based architecture designed to enhance tumor segmentation in hematoxylin and eosin (H&E)-stained images. By integrating local and global attention mechanisms, OCANet captures complex cancerous patterns that existing deep-learning models may overlook. A Large Language Model (LLM) analyzes feature maps and Grad-CAM visualizations to improve interpretability, providing insights into the model’s decision-making process.
Methods:
OCANet incorporates the Channel and Spatial Attention Fusion (CSAF) module, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and residual connections to refine feature extraction and segmentation. The model was evaluated on the Oral Cavity-Derived Cancer (OCDC) and Oral Cancer Annotated (ORCA) datasets and the DigestPath colon tumor dataset to assess generalizability. Performance was measured using accuracy, Dice Similarity Coefficient (DSC), and mean Intersection over Union (mIoU), focusing on class-specific segmentation performance.
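The Squeeze-and-Excitation blocks mentioned above reweight feature channels using global context: spatial information is "squeezed" into a per-channel descriptor, passed through a small bottleneck network, and used to gate the original channels. The following is a minimal NumPy sketch of that mechanism, not OCANet's actual implementation; the random bottleneck weights and the `(C, H, W)` layout are illustrative assumptions.

```python
import numpy as np

def se_block(feature_map, reduction=4, rng=None):
    """Illustrative Squeeze-and-Excitation block.

    feature_map: array of shape (C, H, W).
    The bottleneck weights are random here purely for demonstration;
    in a trained network they are learned parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feature_map.shape[0]
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = np.maximum(w1 @ z, 0.0)                 # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # sigmoid, shape (C,)
    # Scale: channel-wise reweighting of the input feature map
    return feature_map * gate[:, None, None]

x = np.ones((8, 4, 4))
y = se_block(x)
print(y.shape)  # (8, 4, 4)
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1), letting the network emphasize informative channels and suppress uninformative ones.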
Results:
OCANet outperformed state-of-the-art models across all datasets. On ORCA, it achieved 90.98% accuracy, 86.14% DSC, and 77.10% mIoU. On OCDC, it reached 98.24% accuracy, 94.09% DSC, and 88.84% mIoU. On DigestPath, it demonstrated strong generalization with 84.65% DSC despite limited training data. The model showed superior carcinoma detection performance, distinguishing cancerous from non-cancerous regions with high specificity.
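For reference, the Dice Similarity Coefficient and mean IoU reported above are standard overlap metrics for segmentation masks. A minimal sketch of both for binary masks (function names and the tiny example masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred, target, num_classes=2):
    """mIoU: IoU averaged over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
print(round(mean_iou(pred, target), 3))          # 0.5
```

DSC rewards overlap relative to total mask size, while mIoU averages per-class overlap over union, so the two can diverge on imbalanced masks; reporting both, as the study does, gives a fuller picture of class-specific performance.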
Conclusion:
OCANet enhances tumor segmentation accuracy and interpretability in histopathological images by integrating advanced attention mechanisms. Combining visual and textual insights, its multimodal explainability framework improves transparency while supporting clinical decision-making. With strong generalization across datasets and computational efficiency, OCANet presents a promising tool for oral and other cancer diagnostics, particularly in resource-limited settings.
About the Journal:
Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.