Multimodal multilevel attention for semi-supervised skeleton-based gesture recognition
Jinting Liu, Minggang Gan, Yuxuan He, Jia Guo, Kang Hu
Complex & Intelligent Systems, published 2025-02-28. DOI: https://doi.org/10.1007/s40747-025-01807-x
Abstract
Although skeleton-based gesture recognition with supervised learning has achieved promising results, its reliance on extensive annotated data incurs significant cost. This paper addresses semi-supervised skeleton-based gesture recognition, aiming to learn effective feature representations from both labeled and unlabeled data. To this end, we propose a novel multimodal multilevel attention network designed for semi-supervised learning. The model uses a self-attention mechanism to aggregate complementary multimodal and multilevel semantic information from the hand skeleton, and introduces a multimodal multilevel contrastive loss to measure feature similarity. Specifically, our method exploits the relationships among the joint, bone, and motion modalities to learn more discriminative feature representations. Reflecting the hierarchy of the hand skeleton, the skeleton data is divided into multiple levels to capture complementary semantic information, and the multimodal contrastive loss measures similarity among these multilevel representations. The proposed method improves performance on semi-supervised skeleton-based gesture recognition, as demonstrated by experiments on the SHREC-17 and DHG 14/28 datasets.
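To make the contrastive component more concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss applied across two feature views (e.g., joint vs. bone embeddings of the same gestures). All names, shapes, the temperature value, and the pairwise-averaging scheme are assumptions for illustration; the paper's exact multimodal multilevel formulation may differ.

```python
# Hedged sketch: cross-view contrastive loss over modality/level embeddings.
# Assumes each encoder outputs a (batch, dim) embedding per modality-level pair.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_a: torch.Tensor,
                                z_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss between two views of the same batch.

    z_a, z_b: (batch, dim) embeddings from two modalities or two skeleton
    levels. Matching rows are positives; all other rows in the batch act
    as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrize: a-to-b and b-to-a retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def total_contrastive_loss(reps: dict) -> torch.Tensor:
    """Average the pairwise loss over all modality/level embedding pairs.

    reps maps a hypothetical key like 'joint_level1' to its (batch, dim)
    embedding tensor.
    """
    keys = list(reps)
    losses = [multimodal_contrastive_loss(reps[a], reps[b])
              for i, a in enumerate(keys) for b in keys[i + 1:]]
    return torch.stack(losses).mean()
```

In a semi-supervised setting, a loss like this can be computed on unlabeled batches while a standard cross-entropy term is applied to the labeled subset; how the two terms are weighted is a design choice not specified in the abstract.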
Journal Introduction
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques that foster cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research the journal focuses on expands the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.