Latest Articles from IEEE Transactions on Multimedia

Video Instance Segmentation Without Using Mask and Identity Supervision
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521668
Ge Li;Jiale Cao;Hanqing Sun;Rao Muhammad Anwer;Jin Xie;Fahad Khan;Yanwei Pang
Abstract: Video instance segmentation (VIS) is a challenging vision problem in which the task is to simultaneously detect, segment, and track all the object instances in a video. Most existing VIS approaches rely on pixel-level mask supervision within a frame as well as instance-level identity annotation across frames. However, obtaining these "mask and identity" annotations is time-consuming and expensive. We propose the first mask-identity-free VIS framework that neither utilizes mask annotations nor requires identity supervision. Accordingly, we introduce a query contrast and exchange network (QCEN) comprising instance query contrast and query-exchanged mask learning. The instance query contrast first performs cross-frame instance matching and then conducts query feature contrastive learning. The query-exchanged mask learning exploits both intra-video and inter-video query exchange properties: exchanging queries of an identical instance from different frames within a video results in consistent instance masks, whereas exchanging queries across videos results in all-zero background masks. Extensive experiments on three benchmarks (YouTube-VIS 2019, YouTube-VIS 2021, and OVIS) reveal the merits of the proposed approach, which significantly reduces the performance gap between the identity-free baseline and our mask-identity-free VIS method. On the YouTube-VIS 2019 validation set, our mask-identity-free approach achieves 91.4% of the stronger-supervision-based baseline performance when utilizing the same ImageNet pre-trained model.

IEEE Transactions on Multimedia, vol. 27, pp. 224-235.
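The instance query contrast step (cross-frame matching followed by query feature contrastive learning) can be illustrated with a minimal NumPy sketch. The greedy argmax matching and InfoNCE-style loss below are assumptions chosen for illustration, not the paper's exact formulation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def query_contrast_loss(q_a, q_b, temperature=0.1):
    """Cross-frame instance matching + query contrastive loss (illustrative).

    q_a, q_b: (N, D) instance query embeddings from two frames of one video.
    Each frame-A query is matched to its most similar frame-B query; the
    matched query is the positive, all other frame-B queries are negatives.
    """
    a, b = l2_normalize(q_a), l2_normalize(q_b)
    sim = a @ b.T / temperature          # (N, N) similarity logits
    match = sim.argmax(axis=1)           # greedy cross-frame matching
    # log-softmax over frame-B queries, pick the matched (positive) entry
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss = -logp[np.arange(len(a)), match].mean()
    return loss, match
```

With a near-identical (permuted, lightly perturbed) second frame, the matching recovers the permutation and the loss is close to zero.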
Citations: 0
Multi-Perspective Pseudo-Label Generation and Confidence-Weighted Training for Semi-Supervised Semantic Segmentation
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521801
Kai Hu;Xiaobo Chen;Zhineng Chen;Yuan Zhang;Xieping Gao
Abstract: Self-training has been shown to achieve remarkable gains in semi-supervised semantic segmentation by creating pseudo-labels using unlabeled data. This approach, however, is limited by the quality of the generated pseudo-labels, and generating higher-quality pseudo-labels is the main challenge that needs to be addressed. In this paper, we propose a novel method for semi-supervised semantic segmentation based on Multi-perspective pseudo-label Generation and Confidence-weighted Training (MGCT). First, we present a multi-perspective pseudo-label generation strategy that considers both global and local semantic perspectives. This strategy prioritizes pixels in all images by the global and local predictions, and subsequently generates pseudo-labels for different pixels in stages according to the ranking results. Our pseudo-label generation method shows superior suitability for semi-supervised semantic segmentation compared to other approaches. Second, we propose a confidence-weighted training method to alleviate performance degradation caused by unstable pixels. Our training method assigns confidence weights to unstable pixels, which reduces the interference of unstable pixels during training and facilitates efficient training of the model. Finally, we validate our approach on the PASCAL VOC 2012 and Cityscapes datasets, and the results indicate that we achieve new state-of-the-art performance on both datasets in all settings.

IEEE Transactions on Multimedia, vol. 27, pp. 300-311.
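Confidence-weighted training over pseudo-labeled pixels can be sketched as a weighted cross-entropy. Using the prediction's own maximum softmax probability as the confidence weight is an assumption here; the paper's exact weighting scheme may differ:

```python
import numpy as np

def confidence_weighted_ce(logits, pseudo_labels, gamma=2.0):
    """Confidence-weighted cross-entropy for pseudo-labeled pixels (sketch).

    logits: (P, C) per-pixel class scores; pseudo_labels: (P,) hard labels.
    Unstable pixels (low max probability) are down-weighted so they
    interfere less with training.
    """
    z = logits - logits.max(axis=1, keepdims=True)       # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = p.max(axis=1) ** gamma                        # per-pixel weight
    ce = -np.log(p[np.arange(len(p)), pseudo_labels] + 1e-12)
    return (conf * ce).sum() / (conf.sum() + 1e-12)
```

A confident pixel with a clean pseudo-label dominates the average, so the weighted loss sits below the unweighted mean when ambiguous pixels carry large errors.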
Citations: 0
Advancing Generalizable Occlusion Modeling for Neural Human Radiance Field
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521787
Bingzheng Liu;Jianjun Lei;Bo Peng;Zhe Zhang;Jie Zhu;Qingming Huang
Abstract: Generalizable human neural rendering aims to render target views of the human body by leveraging source views and the skinned multi-person linear (SMPL) model. Despite exhibiting promising performance, the target views rendered by previous methods usually contain corrupted parts of the human body. Two primary challenges hinder high-quality human neural rendering: non-correspondences between 2D pixels and 3D SMPL vertices induced by self-occlusion of the human body, and erroneous appearance predictions caused by occlusion between the source and target views. To solve these two challenges, we propose a generalizable occlusion modeling method for the neural human radiance field, in which the hurdles from self-occlusion of the human body and occlusion between source and target views are explored and solved. Specifically, to alleviate the non-correspondence problem induced by self-occlusion, a geometry perception module is designed to obtain 3D geometric representations of SMPL vertices, enabling the prediction of accurate density values. Furthermore, a visibility aggregation module is designed to estimate visibility maps with respect to different source views by utilizing the predicted density. Then, the complementary information among multiple source views is integrated with the support of the visibility maps in the visibility aggregation module, thus effectively addressing the occlusion between views. Experiments on the ZJU-MoCap and THUman datasets show that the proposed method achieves promising performance compared with existing state-of-the-art methods.

IEEE Transactions on Multimedia, vol. 27, pp. 1362-1373.
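The core idea of visibility-guided aggregation — suppressing occluded source views when fusing per-view features for a 3D sample point — can be sketched as a visibility-weighted average. The paper's module is learned; this plain weighted mean is only an illustrative stand-in:

```python
import numpy as np

def visibility_aggregate(feats, vis):
    """Fuse multi-view features for one 3D sample point (sketch).

    feats: (V, D) per-source-view features for the point.
    vis:   (V,)   visibility of the point in each source view (0 = occluded).
    Occluded views get zero weight; weights renormalize over visible views.
    """
    w = vis / (vis.sum() + 1e-8)
    return (w[:, None] * feats).sum(axis=0)
```

With one view fully occluded, the output reduces to the visible view's feature, which is the behavior the visibility maps are meant to enforce.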
Citations: 0
Uncertainty Guided Progressive Few-Shot Learning Perception for Aerial View Synthesis
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521727
Zihan Gao;Lingling Li;Xu Liu;Licheng Jiao;Fang Liu;Shuyuan Yang
Abstract: View synthesis of aerial scenes has gained attention with the recent development of applications such as urban planning, navigation, and disaster assessment. This development is closely connected to the recent advancement of the Neural Radiance Field (NeRF). However, when autonomous aerial vehicles (AAVs) encounter constraints such as limited perspectives or energy limitations, NeRF degrades with sparsely sampled views in complex aerial scenes. On this basis, we aim to solve this problem in a few-shot manner. In this paper, we propose Uncertainty Guided Perception NeRF (UPNeRF), an uncertainty-guided perceptual learning framework that focuses on applying and improving NeRF in few-shot aerial view synthesis (FSAVS). First, simply optimizing NeRF in complex aerial scenes with sparse input can lead to overfitting on training views, resulting in a collapsed model. To address this, we propose a progressive learning strategy that utilizes the uncertainty present in sparsely sampled views, enabling a gradual transition from easy to hard learning. Second, to take advantage of the inherent inductive bias in the data, we introduce an uncertainty-aware discriminator. This discriminator leverages convolutional capabilities to capture intricate patterns in the rendered patches associated with uncertainty. Third, direct optimization of NeRF lacks prior knowledge of the scene. This, coupled with a reduction in training views, can result in unrealistic rendering. To overcome this, we present a perceptual regularizer that incorporates prior knowledge through prompt tuning of a self-supervised pre-trained vision transformer. In addition, we adopt a sampled scene annealing strategy to enhance training stability. Finally, we conducted experiments on two public datasets, and the positive results indicate that our method is effective.

IEEE Transactions on Multimedia, vol. 27, pp. 1177-1192.
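The easy-to-hard progressive strategy can be sketched as an uncertainty-quantile curriculum: early in training only low-uncertainty samples contribute, and the admitted fraction grows toward 100%. The linear 50%-to-100% schedule is an assumption for illustration, not the paper's schedule:

```python
import numpy as np

def curriculum_mask(uncertainty, step, total_steps):
    """Easy-to-hard curriculum over uncertainty scores (sketch).

    Keeps samples whose uncertainty falls below a quantile that grows
    linearly from 50% (start of training) to 100% (end of training).
    """
    frac = 0.5 + 0.5 * min(step / max(total_steps, 1), 1.0)
    thresh = np.quantile(uncertainty, frac)
    return uncertainty <= thresh
```

At step 0 roughly the easiest half of samples pass; by the final step every sample is included.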
Citations: 0
Augment One With Others: Generalizing to Unforeseen Variations for Visual Tracking
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521842
Jinpu Zhang;Ziwen Li;Ruonan Wei;Yuehuan Wang
Abstract: Unforeseen appearance variation is a challenging factor for visual tracking. This paper provides a novel solution via semantic data augmentation, which facilitates offline training of trackers for better generalization. We utilize knowledge obtained from existing samples to augment others in terms of diversity and hardness. First, we propose that the similarity matching space in Siamese-like models has class-agnostic transferability. Based on this, we design Latent Augmentation (LaAug) to transfer relevant variations and suppress irrelevant ones between training similarity embeddings of different classes. Thus the model can generalize across a more diverse semantic distribution. Then, we propose Semantic Interaction Mix (SIMix), which interacts moments between different feature samples to contaminate structure and texture attributes while retaining other semantic attributes. SIMix simulates occlusion and complements the training distribution with hard cases. The mixed features with adversarial perturbations can empirically strengthen the model against external environmental disturbances. Experiments on six challenging benchmarks demonstrate that three representative tracking models, i.e., SiamBAN, TransT and OSTrack, can be consistently improved by incorporating the proposed methods without extra parameters or inference cost.

IEEE Transactions on Multimedia, vol. 27, pp. 1461-1474.
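Mixing feature moments between samples — the mechanism SIMix builds on — can be sketched with channel-wise mean/std interpolation in the style of MixStyle-type augmentations. This generic moment mix is an illustrative analogue, not the paper's SIMix operator:

```python
import numpy as np

def moment_mix(x, y, lam=0.5):
    """Interpolate channel-wise moments (mean, std) of feature map x with
    those of y while keeping x's normalized content (sketch).

    x, y: (C, H, W) feature maps; lam=1 keeps x's own statistics.
    """
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sd_x = x.std(axis=(1, 2), keepdims=True) + 1e-6
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sd_y = y.std(axis=(1, 2), keepdims=True) + 1e-6
    mu = lam * mu_x + (1 - lam) * mu_y
    sd = lam * sd_x + (1 - lam) * sd_y
    return (x - mu_x) / sd_x * sd + mu
```

At lam=1 the input passes through unchanged; at lam=0 the output carries y's per-channel statistics on x's content, contaminating texture-like attributes while preserving structure-normalized content.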
Citations: 0
Progressive Knowledge Distillation From Different Levels of Teachers for Online Action Detection
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521772
Md Moniruzzaman;Zhaozheng Yin
Abstract: In this paper, we explore the problem of Online Action Detection (OAD), where the task is to detect ongoing actions from streaming videos without access to future video frames. Existing methods achieve good detection performance by capturing long-range temporal structures. However, a major challenge of this task is to detect actions at a specific time that arrive with insufficient observations. In this work, we utilize the additional future frames available at the training phase and propose a novel Knowledge Distillation (KD) framework for OAD, where a teacher network looks at more frames from the future and the student network distills the knowledge from the teacher for detecting ongoing actions from the observation up to the current frames. Usually, conventional KD uses a high-level teacher network (i.e., the network after the last training iteration) to guide the student network throughout all training iterations, which may result in poor distillation due to the large knowledge gap between the high-level teacher and the student network at early training iterations. To remedy this, we propose progressive knowledge distillation from different levels of teachers (PKD-DLT) for OAD, where in addition to a high-level teacher, we also generate several low- and middle-level teachers, and progressively transfer the knowledge (in the order of low- to high-level) to the student network throughout training iterations, for effective distillation. Evaluated on two challenging datasets, THUMOS14 and TVSeries, we validate that our PKD-DLT is an effective teacher-student learning paradigm, which can serve as a plug-in to improve the performance of existing OAD models and achieves state-of-the-art performance.

IEEE Transactions on Multimedia, vol. 27, pp. 1526-1537.
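The low-to-high teacher schedule can be sketched as an interpolation over an ordered list of teacher outputs: early training steps distill from low-level teachers, later steps from higher-level ones. The linear interpolation between adjacent teachers is an assumption for illustration; the paper may switch teachers differently:

```python
import numpy as np

def progressive_kd_targets(teacher_logits, step, total_steps):
    """Blend distillation targets from an ordered teacher hierarchy (sketch).

    teacher_logits: list of (C,) logits ordered low- to high-level teacher.
    The effective teacher index moves linearly from the first (lowest) to
    the last (highest) teacher over the course of training.
    """
    k = len(teacher_logits)
    pos = (step / max(total_steps, 1)) * (k - 1)   # fractional teacher index
    lo, hi = int(np.floor(pos)), int(np.ceil(pos))
    w = pos - lo
    return (1 - w) * teacher_logits[lo] + w * teacher_logits[hi]
```

At step 0 the student sees only the lowest-level teacher's targets; at the final step, only the highest-level teacher's.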
Citations: 0
HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain Generalization
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521719
Guanglin Zhou;Zhongyi Han;Shiming Chen;Biwei Huang;Liming Zhu;Tongliang Liu;Lina Yao;Kun Zhang
Abstract: Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features. In DG, the prevalent practice of constraining models to a fixed structure or uniform parameterization to encapsulate invariant features can inadvertently blend in specific aspects. Such an approach struggles with nuanced differentiation of inter-domain variations and may exhibit bias toward certain domains, hindering the precise learning of domain-invariant features. Recognizing this, we introduce a novel method designed to supplement the model with domain-level and task-specific characteristics. This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting generalization. Building on the emerging trend of visual prompts in the DG paradigm, our work introduces the novel Hierarchical Contrastive Visual Prompt (HCVP) methodology. This represents a significant advancement in the field, setting itself apart with a unique generative approach to prompts, alongside an explicit model structure and specialized loss functions. Differing from traditional visual prompts that are often shared across entire datasets, HCVP utilizes a hierarchical prompt generation network enhanced by prompt contrastive learning. These generative prompts are instance-dependent, catering to the unique characteristics inherent to different domains and tasks. Additionally, we devise a prompt modulation network that serves as a bridge, effectively incorporating the generated visual prompts into the vision transformer backbone. Experiments conducted on five DG datasets demonstrate the effectiveness of HCVP, outperforming both established DG algorithms and adaptation protocols.

IEEE Transactions on Multimedia, vol. 27, pp. 1142-1152.
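One way a prompt modulation network can inject a generated prompt into transformer tokens is FiLM-style per-channel scale and shift. The abstract does not specify the modulation form, so this is purely an assumed illustration; the function and weight names are hypothetical:

```python
import numpy as np

def prompt_modulate(tokens, prompt, W_scale, W_shift):
    """FiLM-style prompt modulation (assumed form, for illustration).

    tokens:  (N, D) patch tokens entering a backbone block.
    prompt:  (P,)   generated domain/task prompt vector.
    W_scale, W_shift: (P, D) projection matrices (hypothetical parameters).
    The prompt produces a per-channel scale and shift applied to every token.
    """
    scale = np.tanh(prompt @ W_scale)      # (D,), bounded scale offsets
    shift = prompt @ W_shift               # (D,)
    return tokens * (1.0 + scale) + shift
```

With zero projection weights the tokens pass through unchanged, so the modulation starts as an identity and learns to specialize per domain/task.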
Citations: 0
Primary Code Guided Targeted Attack against Cross-modal Hashing Retrieval
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521697
Xinru Guo;Huaxiang Zhang;Li Liu;Dongmei Liu;Xu Lu;Hui Meng
Abstract: Deep hashing algorithms have demonstrated considerable success in recent years, particularly in cross-modal retrieval tasks. Although hash-based cross-modal retrieval methods have demonstrated considerable efficacy, the vulnerability of deep networks to adversarial examples represents a significant challenge for hash retrieval. In the absence of target semantics, previous non-targeted attack methods attempt to attack deep models by adding perturbations to the input data, yielding some positive outcomes. Nevertheless, they still lack specific instance-level hash codes and fail to consider the diversity and semantic association of different modalities, which is insufficient to meet the attacker's expectations. In response, we present a novel Primary code Guided Targeted Attack (PGTA) against cross-modal hashing retrieval. Specifically, we integrate cross-modal instances and labels to obtain well-fused target semantics, thereby enhancing cross-modal interaction. Secondly, the primary code is designed to generate discriminable information with fine-grained semantics for target labels. Benign samples and target semantics collectively generate adversarial examples under the guidance of primary codes, thereby enhancing the efficacy of targeted attacks. Extensive experiments demonstrate that our PGTA outperforms the most advanced methods on three datasets, achieving state-of-the-art targeted attack performance.

IEEE Transactions on Multimedia, vol. 27, pp. 312-326.
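The generic objective behind a targeted hash attack — perturbing an input so its continuous hash output aligns with a chosen target code — can be sketched against a toy linear-tanh hashing head. The head h(x) = tanh(Wx) is a stand-in for a deep hashing network, and the FGSM-style step is a generic technique, not PGTA's primary-code-guided procedure:

```python
import numpy as np

def targeted_hash_loss(h, target_code):
    """h: (K,) continuous hash output in (-1, 1); target_code: (K,) in {-1, +1}.
    Minimizing this drives sign(h) toward the target hash code."""
    return -np.mean(h * target_code)

def fgsm_targeted_step(x, W, target_code, eps=0.02):
    """One FGSM-style descent step on the targeted loss for the toy head
    h(x) = tanh(W @ x). The analytic gradient of -mean(h * t) w.r.t. x is
    -(1/K) * W^T [(1 - h^2) * t]."""
    h = np.tanh(W @ x)
    grad = -(W.T @ ((1.0 - h ** 2) * target_code)) / len(h)
    return x - eps * np.sign(grad)   # signed-gradient descent
```

Iterating the step drives the loss down from its value at the benign input, i.e., the toy hash output moves toward the target code.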
Citations: 0
PointAttention: Rethinking Feature Representation and Propagation in Point Cloud
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521745
Shichao Zhang;Yibo Ding;Tianxiang Huo;Shukai Duan;Lidan Wang
Abstract: Self-attention mechanisms have revolutionized natural language processing and computer vision. However, in point cloud analysis, most existing methods focus on point convolution operators for feature extraction but fail to model long-range and hierarchical dependencies. To overcome these issues, in this paper we present PointAttention, a novel network for point cloud feature representation and propagation. Specifically, this architecture uses a two-stage Learnable Self-attention for learning long-range attention weights, which is more effective than conventional triple attention. Furthermore, it employs a Hierarchical Learnable Attention Mechanism to formulate a momentous global prior representation and perform fine-grained context understanding, which enables our framework to break through the limitation of the receptive field and reduce the loss of context. Interestingly, we show that the proposed Learnable Self-attention is equivalent to the coupling of two Softmax attention operations while having lower complexity. Extensive experiments demonstrate that our network achieves highly competitive performance on several challenging publicly available benchmarks, including point cloud classification on ScanObjectNN and ModelNet40, and part segmentation on ShapeNet-Part.

IEEE Transactions on Multimedia, vol. 27, pp. 327-339.
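A known construction in this family couples two softmax operations to cut attention cost from O(N²d) to O(Nd²) (as in efficient/linear attention variants): normalize K over the N points, Q over the d channels, and use associativity. This is an illustrative analogue of "coupling two Softmax attentions", not the paper's exact operator:

```python
import numpy as np

def softmax(x, axis):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def coupled_softmax_attention(Q, K, V):
    """Two coupled softmax attentions with linear cost in N (sketch).

    Q, K, V: (N, d). K is softmax-normalized over the N points, Q over the
    d channels; computing (K^T V) first gives a (d, d) global context, so
    the cost is O(N*d^2) instead of O(N^2*d) for softmax(QK^T)V.
    """
    q = softmax(Q, axis=1)        # (N, d), rows sum to 1
    k = softmax(K, axis=0)        # (N, d), columns sum to 1
    ctx = k.T @ V                 # (d, d) global context summary
    return q @ ctx                # (N, d)
```

Because both normalizations are convex combinations, every output entry stays within the range of V, i.e., the operator mixes values rather than amplifying them.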
Citations: 0
Adaptive Pitfall: Exploring the Effectiveness of Adaptation in Skeleton-Based Action Recognition
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-24 DOI: 10.1109/TMM.2024.3521774
Qiguang Miao;Wentian Xin;Ruyi Liu;Yi Liu;Mengyao Wu;Cheng Shi;Chi-Man Pun
Abstract: Graph convolution networks (GCNs) have achieved remarkable performance in skeleton-based action recognition by exploiting the adjacency topology of the body representation. However, the adaptive strategy adopted by previous methods to construct the adjacency matrix does not balance performance against computational cost. We identify this as an "Adaptive Trap": the adaptive module can be replaced by multiple autonomous submodules, thereby simultaneously enhancing the dynamic joint representation and effectively reducing network resources. To effectuate the substitution of the adaptive model, we unveil two distinct strategies, both yielding comparable effects. (1) Optimization: Individuality and Commonality GCNs (IC-GCNs) are proposed to specifically optimize the construction of the associativity adjacency matrix for adaptive processing. The uniqueness and co-occurrence between different joint points and frames in the skeleton topology are effectively captured through methodologies like preferential fusion of physical information, extreme compression of multi-dimensional channels, and simplification of the self-attention mechanism. (2) Replacement: Auto-Learning GCNs (AL-GCNs) are proposed to boldly remove popular adaptive modules and cleverly utilize human key points as motion compensation to provide dynamic correlation support. AL-GCNs construct a fully learnable group adjacency matrix in both spatial and temporal dimensions, resulting in an elegant and efficient GCN-based model. In addition, three effective tricks for skeleton-based action recognition (Skip-Block, Bayesian Weight Selection Algorithm, and Simplified Dimensional Attention) are exposed and analyzed in this paper. Finally, we employ the variable channel and grouping method to explore the hardware resource bound of the two proposed models. IC-GCN and AL-GCN exhibit impressive performance across the NTU-RGB+D 60, NTU-RGB+D 120, NW-UCLA, and UAV-Human datasets, with an exceptional parameter-cost ratio.

IEEE Transactions on Multimedia, vol. 27, pp. 56-71.
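A "fully learnable adjacency" graph-convolution step — where the joint topology is a free parameter rather than the fixed skeleton — can be sketched as follows. Row-softmax normalization of the learned adjacency logits is an assumption chosen for a self-contained example, not necessarily AL-GCN's normalization:

```python
import numpy as np

def learnable_adjacency_gcn(X, A_logits, W):
    """One GCN step with a fully learnable adjacency matrix (sketch).

    X:        (J, C)  per-joint features.
    A_logits: (J, J)  free adjacency parameters (no predefined skeleton).
    W:        (C, C_out) feature projection.
    Adjacency = row-softmax(A_logits), so each joint aggregates a learned
    mixture over all joints; output passes through ReLU.
    """
    z = A_logits - A_logits.max(axis=1, keepdims=True)   # stable softmax
    A = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.maximum(A @ X @ W, 0.0)                    # ReLU
```

Since the adjacency is just a parameter tensor, it can be trained like any other weight, which is what removes the need for an input-dependent adaptive module.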
Citations: 0