Compact convolution transformer with cross-feature aggregation for hand-gesture recognition

IF 4.9 · CAS Tier 3 (Computer Science) · Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Satya Narayan, Praful Hambarde, Santosh Kumar Vipparthi, Arka Prokash Mazumdar, Subrahmanyam Murala
DOI: 10.1016/j.compeleceng.2025.110727
Published: 2025-09-27 · Computers & Electrical Engineering, Volume 128, Article 110727
Citations: 0

Abstract

Hand Gesture Recognition (HGR) plays a crucial role in intuitive human–computer interaction but continues to face challenges such as complex backgrounds, lighting variations, occlusions, and limited training data. To overcome these issues, we propose a Cross Feature Aggregation Compact Convolution Transformer (CrFe-CCT) that integrates multiscale convolutional features with a lightweight transformer architecture. The proposed CrFe-CCT network comprises a multi-scale Cross Feature Aggregation (CrFe) module and a CCT module. The CrFe module enhances feature robustness by fusing contextual information across scales, improving recognition accuracy while maintaining low computational complexity, and the CCT module helps preserve local spatial relationships. Unlike conventional transformers that rely on large-scale data, CrFe-CCT enables efficient learning on both small and large datasets. Experimental results demonstrate that the proposed CrFe-CCT outperforms existing state-of-the-art approaches on subject-dependent datasets, achieving accuracies of 91.95% (HGR-1), 97.70% (MUGD Set1), 95.50% (MUGD Set2), 99.06% (MUGD Set3), 99.82% (NUS-II), 99.90% (ASL-Finger Spelling (FS)), and 96.80% (OUHands). On subject-independent datasets, the CrFe-CCT network achieves 40.43% (HGR-1), 85.11% (MUGD), 70.34% (NUS-II), and 82.20% (ASL-Finger Spelling (FS)). Furthermore, it demonstrates superior efficiency in parameter count, memory usage, FLOPs, inference time, and image throughput for real-world HGR applications.
The source code is available at https://github.com/satyantazi/CrFe-CCT.
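As a rough illustration of the two ideas described in the abstract (multi-scale feature aggregation, and a compact convolutional transformer built from a convolutional tokenizer plus sequence pooling), the PyTorch sketch below shows one way such a pipeline can be wired together. All module names, layer sizes, and hyperparameters here are hypothetical and chosen only for clarity; they do not reproduce the authors' CrFe-CCT implementation, which is available at the repository linked above.

```python
# Minimal, hypothetical sketch of a multi-scale aggregation block followed by a
# compact convolutional-transformer classifier. Not the authors' CrFe-CCT code.
import torch
import torch.nn as nn


class MultiScaleAggregation(nn.Module):
    """Fuse contextual features extracted at several receptive-field sizes."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)
        self.branch_dil = nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)   # 1x1 cross-feature fusion
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat(
            [self.branch3(x), self.branch5(x), self.branch_dil(x)], dim=1
        )
        return self.act(self.fuse(multi))


class CompactConvTransformer(nn.Module):
    """Convolutional tokenizer + transformer encoder + sequence pooling."""

    def __init__(self, num_classes: int, embed_dim: int = 128,
                 depth: int = 4, heads: int = 4):
        super().__init__()
        self.aggregate = MultiScaleAggregation(3, 64)
        # Convolutional tokenizer: keeps local spatial structure before attention.
        # Explicit positional embeddings are omitted for brevity.
        self.tokenizer = nn.Sequential(
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.GELU(),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(
            embed_dim, heads, dim_feedforward=2 * embed_dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.attn_pool = nn.Linear(embed_dim, 1)  # sequence pooling, no class token
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.aggregate(x)                                    # (B, 64, H, W)
        tokens = self.tokenizer(x).flatten(2).transpose(1, 2)    # (B, N, embed_dim)
        tokens = self.encoder(tokens)
        weights = torch.softmax(self.attn_pool(tokens), dim=1)   # (B, N, 1)
        pooled = (weights * tokens).sum(dim=1)                   # (B, embed_dim)
        return self.head(pooled)


if __name__ == "__main__":
    model = CompactConvTransformer(num_classes=10)
    logits = model(torch.randn(2, 3, 128, 128))  # two 128x128 RGB hand images
    print(logits.shape)                          # torch.Size([2, 10])
```

The attention-weighted sequence pooling at the end is what keeps this kind of transformer "compact": instead of a learnable class token, the token sequence is collapsed by a learned weighting, which tends to work better on small datasets.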
Source journal

Computers & Electrical Engineering (Engineering & Technology — Electrical & Electronic Engineering)
CiteScore: 9.20
Self-citation rate: 7.00%
Annual publications: 661
Review time: 47 days
About the journal: The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.

Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.