{"title":"用于医学图像分割的多轴视觉变压器","authors":"Abdul Rehman Khan , Asifullah Khan","doi":"10.1016/j.engappai.2025.111251","DOIUrl":null,"url":null,"abstract":"<div><div>Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have shown remarkable success in medical image segmentation, but individually, they struggle to capture both local and global contexts. To address this limitation, we propose MaxViT-UNet, a hybrid encoder–decoder architecture that integrates convolutional operations and multi-axis self-attention to capture local and global context for effective medical image segmentation. Our novel Hybrid Decoder fuses upsampled decoder features with encoder skip connections and refines them using a multi-axis attention block, repeated across decoding stages for progressive segmentation refinement. Experimental evaluation on the MoNuSeg18 and MoNuSAC20 datasets demonstrates that MaxViT-UNet outperforms traditional CNN-based U-Net by 2.36% and 14.14% Dice score, respectively. Similarly it outperforms Swin-UNet by 5.31% on MoNuSeg18 and nearly doubles the Dice score on MoNuSAC20. These results confirm the generalization and effective segmentation capabilities of our hybrid architecture across diverse histopathological datasets.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"158 ","pages":"Article 111251"},"PeriodicalIF":7.5000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-axis vision transformer for medical image segmentation\",\"authors\":\"Abdul Rehman Khan , Asifullah Khan\",\"doi\":\"10.1016/j.engappai.2025.111251\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have shown remarkable success in medical image segmentation, but individually, they struggle to capture both local and global contexts. To address this limitation, we propose MaxViT-UNet, a hybrid encoder–decoder architecture that integrates convolutional operations and multi-axis self-attention to capture local and global context for effective medical image segmentation. Our novel Hybrid Decoder fuses upsampled decoder features with encoder skip connections and refines them using a multi-axis attention block, repeated across decoding stages for progressive segmentation refinement. Experimental evaluation on the MoNuSeg18 and MoNuSAC20 datasets demonstrates that MaxViT-UNet outperforms traditional CNN-based U-Net by 2.36% and 14.14% Dice score, respectively. Similarly it outperforms Swin-UNet by 5.31% on MoNuSeg18 and nearly doubles the Dice score on MoNuSAC20. 
These results confirm the generalization and effective segmentation capabilities of our hybrid architecture across diverse histopathological datasets.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"158 \",\"pages\":\"Article 111251\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2025-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625012527\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625012527","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have shown remarkable success in medical image segmentation, but individually they struggle to capture both local and global context. To address this limitation, we propose MaxViT-UNet, a hybrid encoder–decoder architecture that integrates convolutional operations with multi-axis self-attention to capture local and global context for effective medical image segmentation. Our novel Hybrid Decoder fuses upsampled decoder features with encoder skip connections and refines them using a multi-axis attention block; this block is repeated across decoding stages for progressive segmentation refinement. Experimental evaluation on the MoNuSeg18 and MoNuSAC20 datasets demonstrates that MaxViT-UNet outperforms the traditional CNN-based U-Net in Dice score by 2.36% and 14.14%, respectively. Similarly, it outperforms Swin-UNet by 5.31% on MoNuSeg18 and nearly doubles the Dice score on MoNuSAC20. These results confirm the generalization and segmentation effectiveness of our hybrid architecture across diverse histopathological datasets.
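The abstract outlines the decoder mechanism: upsampled features are fused with encoder skip connections and refined by a multi-axis attention block that combines local (block) and sparse global (grid) self-attention, in the spirit of MaxViT. Below is a minimal PyTorch sketch of one such decoder stage, for illustration only; the module names (HybridDecoderStage, MultiAxisBlock), channel sizes, the 7x7 attention window, and the head count are assumptions, not the authors' implementation.

# Minimal sketch of a MaxViT-style decoder stage. All hyperparameters and
# module names below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

def window_partition(x: torch.Tensor, p: int, grid: bool) -> torch.Tensor:
    # Split (B, C, H, W) into groups of p*p tokens, returned as (B*n, p*p, C).
    # grid=False: contiguous p x p windows (local "block" attention).
    # grid=True:  tokens strided H/p and W/p apart (global "grid" attention).
    b, c, h, w = x.shape
    if grid:
        x = x.view(b, c, p, h // p, p, w // p).permute(0, 3, 5, 2, 4, 1)
    else:
        x = x.view(b, c, h // p, p, w // p, p).permute(0, 2, 4, 3, 5, 1)
    return x.reshape(-1, p * p, c)

def window_reverse(win: torch.Tensor, p: int, h: int, w: int, grid: bool) -> torch.Tensor:
    # Inverse of window_partition: (B*n, p*p, C) back to (B, C, H, W).
    c = win.shape[-1]
    x = win.view(-1, h // p, w // p, p, p, c)
    x = x.permute(0, 5, 3, 1, 4, 2) if grid else x.permute(0, 5, 1, 3, 2, 4)
    return x.reshape(-1, c, h, w)

class MultiAxisBlock(nn.Module):
    # Local block attention followed by global grid attention.
    def __init__(self, dim: int, heads: int = 4, window: int = 7):
        super().__init__()
        self.window = window
        self.block_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.grid_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def _attend(self, x, attn, norm, grid):
        _, _, h, w = x.shape
        win = norm(window_partition(x, self.window, grid))
        out, _ = attn(win, win, win)  # self-attention within each token group
        return x + window_reverse(out, self.window, h, w, grid)  # residual

    def forward(self, x):
        x = self._attend(x, self.block_attn, self.norm1, grid=False)   # local context
        return self._attend(x, self.grid_attn, self.norm2, grid=True)  # global context

class HybridDecoderStage(nn.Module):
    # Upsample, fuse the encoder skip connection, refine with multi-axis attention.
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1)
        self.refine = MultiAxisBlock(out_ch)

    def forward(self, x, skip):
        x = self.up(x)                              # double the spatial resolution
        x = self.fuse(torch.cat([x, skip], dim=1))  # merge with encoder features
        return self.refine(x)                       # local + global refinement

# Usage on hypothetical feature maps; H and W must be divisible by the window size.
stage = HybridDecoderStage(in_ch=128, skip_ch=64, out_ch=64)
x = torch.randn(1, 128, 14, 14)    # coarse decoder features
skip = torch.randn(1, 64, 28, 28)  # matching encoder skip connection
print(stage(x, skip).shape)        # torch.Size([1, 64, 28, 28])

The two attention passes are what distinguish multi-axis attention from plain windowed attention: the block pass mixes information within each local window, while the grid pass mixes tokens strided across the whole feature map at comparable cost, giving each stage both local and global context.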
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements emerging across machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.