Title: A lightweight vision transformer with weighted global average pooling: implications for IoMT applications
Authors: Huiyao Dong, Igor Kotenko, Shimin Dong
Journal: Complex & Intelligent Systems (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 5.0)
Publication date: 2025-03-17 (Journal Article)
DOI: 10.1007/s40747-025-01842-8
Citations: 0
Abstract
Vision Transformers (ViTs) have garnered significant interest for analysing medical images in Internet of Medical Things (IoMT) systems due to their ability to capture global context. However, deploying ViTs in resource-constrained IoMT environments requires addressing the challenge of adapting these computationally intensive models to meet device limitations while maintaining efficiency. To tackle this issue, we introduce LightAMViT, a lightweight attention mechanism-enhanced ViT, which incorporates K-means clustering layers to reduce the computational complexity of the self-attention matrix, along with an optimized global average pooling layer that leverages all stacked attention block outputs, each weighted by learnable parameters. Additionally, it employs an adaptive learning strategy that facilitates faster convergence by dynamically adjusting the learning rate. We evaluate the proposed technique on two medical image datasets: BUSI and ISIC2020. Our model outperforms conventional CNNs and demonstrates competitive performance compared to the original ViTs, showcasing improvements in both accuracy and computational efficiency. These findings indicate the model’s robustness and generalisation across various medical image analysis tasks, thereby enhancing the applicability of ViTs in resource-limited IoMT devices.
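The abstract names two mechanisms: reducing self-attention cost by clustering keys with K-means, and pooling over all stacked attention-block outputs with learnable weights. The sketch below is a minimal NumPy illustration of both ideas, not the authors' implementation: the cluster count, the naive K-means routine, and the softmax-normalised pooling weights `w` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kmeans(x, k, iters=10):
    """Naive K-means over the rows of x; returns (k, d) centroids and assignments."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (N, k)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = x[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign

def clustered_attention(q, k_mat, v, n_clusters=8):
    """Attend to k key centroids instead of all N keys: O(N*k) scores vs O(N^2)."""
    centroids, assign = kmeans(k_mat, n_clusters)
    # Aggregate values per key cluster so they pair with the centroids.
    pooled_v = np.zeros((n_clusters, v.shape[1]))
    for j in range(n_clusters):
        mask = assign == j
        if mask.any():
            pooled_v[j] = v[mask].mean(axis=0)
    scores = q @ centroids.T / np.sqrt(q.shape[1])  # (N, n_clusters)
    return softmax(scores) @ pooled_v               # (N, d)

def weighted_gap(block_outputs, w):
    """Global average pooling over every block's tokens, mixed by learnable weights w."""
    alpha = softmax(np.asarray(w, dtype=float))  # trained in the paper; fixed here
    return sum(a * out.mean(axis=0) for a, out in zip(alpha, block_outputs))

# Toy forward pass: 16 tokens, 32-dim embeddings, 4 stacked "blocks".
N, d, L = 16, 32, 4
x = rng.standard_normal((N, d))
blocks = []
for _ in range(L):
    x = clustered_attention(x, x, x, n_clusters=8)  # self-attention: q = k = v = x
    blocks.append(x)
feat = weighted_gap(blocks, w=np.zeros(L))  # (d,) image-level feature
print(feat.shape)
```

With equal weights `w = 0` the pooling reduces to a plain average over blocks; training the weights lets the model emphasise whichever depth carries the most discriminative features.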
Journal introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques aimed at cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.