Asynchronous Joint-Based Temporal Pooling for Skeleton-Based Action Recognition
Shanaka Ramesh Gunasekara; Wanqing Li; Jack Yang; Philip O. Ogunbona
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 357-366, published 2024-09-23
DOI: 10.1109/TCSVT.2024.3465845
Link: https://ieeexplore.ieee.org/document/10685538/
Citations: 0
Abstract
Deep neural networks for skeleton-based human action recognition (HAR) often use conventional average or maximum temporal pooling to aggregate features, treating all joints and frames equally. However, this approach can fold less discriminative, or even indiscriminative, features into the final feature vectors used for recognition. To address this issue, this paper introduces a novel method called asynchronous joint adaptive temporal pooling (AJTP). The method enhances action recognition by identifying a set of informative joints across the temporal dimension and applying joint-based, asynchronous, motion-preserving pooling rather than conventional frame-based pooling. The effectiveness of the proposed AJTP has been empirically validated by integrating it with popular Graph Convolutional Network (GCN) models on three benchmark datasets: NTU RGB+D 120, PKU-MMD, and Kinetics-400. The results show that a GCN model with AJTP substantially outperforms the same model equipped with conventional temporal pooling. The source code is available at https://github.com/ShanakaRG/AJTP.
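As a rough illustration of the contrast the abstract draws, the sketch below compares conventional frame-based average/max temporal pooling on a skeleton feature tensor with a hypothetical joint-wise adaptive pooling that keeps, for each joint independently, only its highest-scoring frames. The tensor layout (N, C, T, V), the `topk_ratio` parameter, and the feature-norm scoring are illustrative assumptions; this is not a reproduction of the paper's AJTP method.

```python
import torch


def frame_based_pooling(x, mode="avg"):
    """Conventional temporal pooling: every joint and frame is treated equally.

    x: feature tensor of shape (N, C, T, V) -- batch, channels, frames, joints.
    Returns a (N, C, V) tensor pooled over the temporal dimension.
    """
    if mode == "avg":
        return x.mean(dim=2)
    return x.max(dim=2).values


def jointwise_adaptive_pooling(x, topk_ratio=0.5):
    """Illustrative joint-wise pooling (an assumption, not the paper's AJTP).

    Frames are scored per joint by feature magnitude and only the top-k
    frames are averaged, so different joints may keep different
    (asynchronous) subsets of the sequence.
    """
    n, c, t, v = x.shape
    k = max(1, int(t * topk_ratio))
    # Score each (frame, joint) pair by its L2 feature norm: (N, T, V).
    scores = x.norm(dim=1)
    # Indices of the k highest-scoring frames, chosen per joint: (N, k, V).
    idx = scores.topk(k, dim=1).indices
    # Broadcast the indices over channels and gather the selected frames: (N, C, k, V).
    idx = idx.unsqueeze(1).expand(n, c, k, v)
    selected = x.gather(2, idx)
    # Average only the selected frames for each joint.
    return selected.mean(dim=2)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 25)              # e.g. 32 frames, 25 joints
    print(frame_based_pooling(feats).shape)         # torch.Size([2, 64, 25])
    print(jointwise_adaptive_pooling(feats).shape)  # torch.Size([2, 64, 25])
```

Both functions return one feature vector per joint, but the adaptive variant discards low-magnitude frames on a per-joint basis, which loosely mirrors the abstract's point that informative joints should not be averaged together with indiscriminative frames.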
Journal Description:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.