Frequency-Aware Self-Supervised Group Activity Recognition with skeleton sequences

IF 7.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Guoquan Wang, Mengyuan Liu, Hong Liu, Jinyan Zhang, Peini Guo, Ruijia Fan, Siyu Chen
{"title":"Frequency-Aware Self-Supervised Group Activity Recognition with skeleton sequences","authors":"Guoquan Wang,&nbsp;Mengyuan Liu,&nbsp;Hong Liu,&nbsp;Jinyan Zhang,&nbsp;Peini Guo,&nbsp;Ruijia Fan,&nbsp;Siyu Chen","doi":"10.1016/j.patcog.2025.111710","DOIUrl":null,"url":null,"abstract":"<div><div>Self-supervised, skeleton-based techniques have recently demonstrated great potential for group activity recognition via contrastive learning. However, these methods have difficulty accommodating the dynamic and complex nature of spatio-temporal data, weakening the ability to conduct effective modeling and extract crucial features. To this end, we propose a novel <strong>F</strong>requency-<strong>A</strong>ware <strong>G</strong>roup <strong>A</strong>ctivity <strong>R</strong>ecognition (FAGAR) network, which offers a comprehensive solution by addressing three key subproblems. First, the challenge of extracting discriminative features is further exacerbated by pose estimation algorithms’ limitations under random spatio-temporal data augmentation. To mitigate this, a frequency domain passing augmentation method that emphasizes individual collaborative changes is introduced, effectively filtering out noise interference. Second, the fixed connections in traditional relation modeling networks fail to adapt to dynamic scene changes. To address this, we design an adaptive frequency domain compression network, which dynamically adjusts to scene variations. Third, the temporal modeling process often leads to a loss of focus on key features, reducing the model’s ability to assess individual contributions within a group. To resolve this, we propose an amplitude-aware loss function that guides the network in learning the relative importance of individuals, ensuring it maintains the correct learning direction. Our FAGAR achieves state-of-the-art performance on several datasets for self-supervised skeleton-based group activity recognition. Code is available at <span><span>https://github.com/WGQ109/FAGAR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"167 ","pages":"Article 111710"},"PeriodicalIF":7.5000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S003132032500370X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Self-supervised, skeleton-based techniques have recently demonstrated great potential for group activity recognition via contrastive learning. However, these methods have difficulty accommodating the dynamic and complex nature of spatio-temporal data, weakening the ability to conduct effective modeling and extract crucial features. To this end, we propose a novel Frequency-Aware Group Activity Recognition (FAGAR) network, which offers a comprehensive solution by addressing three key subproblems. First, the challenge of extracting discriminative features is further exacerbated by pose estimation algorithms’ limitations under random spatio-temporal data augmentation. To mitigate this, a frequency domain passing augmentation method that emphasizes individual collaborative changes is introduced, effectively filtering out noise interference. Second, the fixed connections in traditional relation modeling networks fail to adapt to dynamic scene changes. To address this, we design an adaptive frequency domain compression network, which dynamically adjusts to scene variations. Third, the temporal modeling process often leads to a loss of focus on key features, reducing the model’s ability to assess individual contributions within a group. To resolve this, we propose an amplitude-aware loss function that guides the network in learning the relative importance of individuals, ensuring it maintains the correct learning direction. Our FAGAR achieves state-of-the-art performance on several datasets for self-supervised skeleton-based group activity recognition. Code is available at https://github.com/WGQ109/FAGAR.
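As a rough illustration of the first component described in the abstract, the sketch below shows what a frequency-domain low-pass ("passing") augmentation of a skeleton sequence could look like: each joint trajectory is transformed along the time axis, high-frequency bins that typically carry pose-estimation jitter rather than collaborative motion are discarded, and the sequence is transformed back. The function name, tensor layout (T frames, J joints, C coordinates), and `keep_ratio` parameter are assumptions made for illustration only; they are not taken from the FAGAR paper or its released code.

```python
# Minimal sketch of a frequency-domain low-pass augmentation for skeleton sequences.
# Assumes a NumPy array of shape (T, J, C): T frames, J joints, C coordinates per joint.
import numpy as np

def frequency_passing_augment(seq: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Low-pass each joint trajectory along the time axis.

    High-frequency components, which often reflect pose-estimation noise rather
    than genuine collaborative motion, are zeroed out before inverting the transform.
    """
    T = seq.shape[0]
    spectrum = np.fft.rfft(seq, axis=0)            # complex spectrum, shape (T//2 + 1, J, C)
    cutoff = max(1, int(spectrum.shape[0] * keep_ratio))
    spectrum[cutoff:] = 0                          # discard high-frequency bins
    return np.fft.irfft(spectrum, n=T, axis=0)     # back to the time domain, same length

# Example: one actor, 64 frames, 17 joints, 2D coordinates.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    skeleton = rng.standard_normal((64, 17, 2))
    augmented = frequency_passing_augment(skeleton, keep_ratio=0.25)
    print(skeleton.shape, augmented.shape)         # (64, 17, 2) (64, 17, 2)
```

In this sketch, lowering `keep_ratio` removes faster motion components; an augmentation pipeline would typically sample the cutoff randomly per sequence so that contrastive views differ while the slow, coordinated group motion is preserved.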
Source Journal
Pattern Recognition
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 16.20%
Articles published per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.