{"title":"基于卷积三重关注和组织病理学引导投票的DeepLabV3+用于浆液性卵巢癌高光谱图像分割。","authors":"Wenrui Tang, Lijun Wei, Zhenfeng Mo, Jiahao Wang, Xuan Zhang, Siqi Zhu, Lvfen Gao","doi":"10.1002/jbio.202500142","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning has been extensively applied in medical image analysis, providing healthcare professionals with more efficient and accurate diagnostic information. Among these advanced semantic segmentation models, the baseline DeepLabV3+ model is more adept at processing low-dimensional data such as RGB images, but its performance on high-dimensional data like hyperspectral images is suboptimal, limiting its generalization and discriminative capabilities. We propose a highly innovative hybrid architecture integrating a Convolutional Triplet Attention Module (CTAM) to capture cross-dimensional spectral-spatial dependencies and a Histopathology-Guided Voting Mechanism (HVM) to incorporate WHO diagnostic criteria. The results demonstrate that our model can accurately differentiate and localize low-grade and high-grade serous ovarian cancer tissues, with an accuracy of 92.7% and 90.2%, respectively. 
Furthermore, our performance exceeds the pathologist's consensus (85.4%) and surpasses state-of-the-art models (e.g., U-Net, PAN, FPN) by a significant margin of over 20% in LGSC classification, rigorously validating its scientific superiority.</p>","PeriodicalId":94068,"journal":{"name":"Journal of biophotonics","volume":" ","pages":"e202500142"},"PeriodicalIF":2.3000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DeepLabV3+ With Convolutional Triplet Attention and Histopathology-Guided Voting for Hyperspectral Image Segmentation of Serous Ovarian Cancer.\",\"authors\":\"Wenrui Tang, Lijun Wei, Zhenfeng Mo, Jiahao Wang, Xuan Zhang, Siqi Zhu, Lvfen Gao\",\"doi\":\"10.1002/jbio.202500142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning has been extensively applied in medical image analysis, providing healthcare professionals with more efficient and accurate diagnostic information. Among these advanced semantic segmentation models, the baseline DeepLabV3+ model is more adept at processing low-dimensional data such as RGB images, but its performance on high-dimensional data like hyperspectral images is suboptimal, limiting its generalization and discriminative capabilities. We propose a highly innovative hybrid architecture integrating a Convolutional Triplet Attention Module (CTAM) to capture cross-dimensional spectral-spatial dependencies and a Histopathology-Guided Voting Mechanism (HVM) to incorporate WHO diagnostic criteria. The results demonstrate that our model can accurately differentiate and localize low-grade and high-grade serous ovarian cancer tissues, with an accuracy of 92.7% and 90.2%, respectively. 
Furthermore, our performance exceeds the pathologist's consensus (85.4%) and surpasses state-of-the-art models (e.g., U-Net, PAN, FPN) by a significant margin of over 20% in LGSC classification, rigorously validating its scientific superiority.</p>\",\"PeriodicalId\":94068,\"journal\":{\"name\":\"Journal of biophotonics\",\"volume\":\" \",\"pages\":\"e202500142\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of biophotonics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/jbio.202500142\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of biophotonics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/jbio.202500142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DeepLabV3+ With Convolutional Triplet Attention and Histopathology-Guided Voting for Hyperspectral Image Segmentation of Serous Ovarian Cancer.
Deep learning has been extensively applied in medical image analysis, providing healthcare professionals with more efficient and accurate diagnostic information. Among advanced semantic segmentation models, the baseline DeepLabV3+ is well suited to low-dimensional data such as RGB images, but its performance on high-dimensional data such as hyperspectral images is suboptimal, limiting its generalization and discriminative capabilities. We propose a hybrid architecture that integrates a Convolutional Triplet Attention Module (CTAM), which captures cross-dimensional spectral-spatial dependencies, with a Histopathology-Guided Voting Mechanism (HVM), which incorporates WHO diagnostic criteria. The results demonstrate that our model accurately differentiates and localizes low-grade and high-grade serous ovarian cancer tissues, with accuracies of 92.7% and 90.2%, respectively. Furthermore, its performance exceeds the pathologists' consensus (85.4%) and surpasses state-of-the-art models (e.g., U-Net, PAN, FPN) by more than 20% in low-grade serous carcinoma (LGSC) classification.
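The cross-dimensional idea behind triplet attention can be illustrated with a minimal, parameter-free NumPy sketch. This is not the authors' CTAM implementation: the published triplet attention module applies a learned 7x7 convolution to each branch's 2-channel Z-pool output, which we replace here with a simple average of the two pooled channels so the example stays self-contained. The function names (`z_pool`, `triplet_attention`) and the (C, H, W) layout are illustrative assumptions.

```python
import numpy as np

def z_pool(x, axis):
    # Z-pool: stack the max and mean taken along the reduced axis,
    # producing 2 summary maps over the remaining two dimensions.
    return np.stack([x.max(axis=axis), x.mean(axis=axis)], axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def triplet_attention(x):
    """Parameter-free sketch of triplet attention on a (C, H, W) array.

    Each branch reduces one axis with Z-pool, maps the pooled statistics
    to an attention map via a sigmoid, rescales the input with it, and
    the three branch outputs are averaged. (The published module learns
    a 7x7 convolution over the 2-channel Z-pool output; here the two
    channels are simply averaged for brevity.)
    """
    # Branch 1: attend over the spatial (H, W) plane by pooling channels.
    a_hw = sigmoid(z_pool(x, axis=0).mean(axis=0))   # shape (H, W)
    y1 = x * a_hw[None, :, :]
    # Branch 2: attend over the (C, W) plane by pooling the H axis.
    a_cw = sigmoid(z_pool(x, axis=1).mean(axis=0))   # shape (C, W)
    y2 = x * a_cw[:, None, :]
    # Branch 3: attend over the (C, H) plane by pooling the W axis.
    a_ch = sigmoid(z_pool(x, axis=2).mean(axis=0))   # shape (C, H)
    y3 = x * a_ch[:, :, None]
    return (y1 + y2 + y3) / 3.0

# Example: a toy hyperspectral patch with 8 spectral bands on a 4x4 grid.
x = np.random.default_rng(0).normal(size=(8, 4, 4))
y = triplet_attention(x)
assert y.shape == x.shape
```

The two branches that mix the channel (spectral) axis with a spatial axis are what distinguish this design from purely spatial attention, which is the motivation for applying it to spectral-spatial hyperspectral data.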