{"title":"From Body Parts to Holistic Action: A Fine-Grained Teacher-Student CLIP for Action Recognition","authors":"Yangjun Ou;Xiao Shi;Jia Chen;Ruhan He;Chi Liu","doi":"10.1109/LSP.2025.3548448","DOIUrl":null,"url":null,"abstract":"Action recognition in dynamic video remains challenging, particularly when distinguishing between visually similar actions. While existing methods often rely on holistic representations, they overlook the fine-grained details that are significant for accurate classification. We propose a novel Fine-grained Teacher-student CLIP (FT-CLIP) that integrates body part analysis with holistic action recognition through a teacher-student architecture, bridging the gap between fine-grained action parsing and overall action understanding. The teacher model processes individual body parts alongside specialized description to generate part-specific features, which are then aggregated and distilled into the student model. Through knowledge distillation with learnable prompts, the student model effectively learns to capture subtle action distinctions while maintaining efficient inference. FT-CLIP achieves a more nuanced understanding of complex actions by progressing from detailed body part analysis to comprehensive action recognition. Experiments on Kinetics-TPS under a fully-supervised setting and on HMDB51 and UCF101 under a zero-shot setting demonstrate the effectiveness of our method.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"1336-1340"},"PeriodicalIF":3.2000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10910152/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Action recognition in dynamic video remains challenging, particularly when distinguishing between visually similar actions. While existing methods often rely on holistic representations, they overlook the fine-grained details that are essential for accurate classification. We propose a novel Fine-grained Teacher-student CLIP (FT-CLIP) that integrates body part analysis with holistic action recognition through a teacher-student architecture, bridging the gap between fine-grained action parsing and overall action understanding. The teacher model processes individual body parts alongside specialized descriptions to generate part-specific features, which are then aggregated and distilled into the student model. Through knowledge distillation with learnable prompts, the student model effectively learns to capture subtle action distinctions while maintaining efficient inference. FT-CLIP achieves a more nuanced understanding of complex actions by progressing from detailed body part analysis to comprehensive action recognition. Experiments on Kinetics-TPS under a fully-supervised setting and on HMDB51 and UCF101 under a zero-shot setting demonstrate the effectiveness of our method.
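The abstract describes the method only at a high level, so the paper's actual layer choices, losses, and prompt design are not reproduced here. The sketch below is a minimal PyTorch illustration of the general teacher-student idea as described: a teacher that fuses per-part visual features with part descriptions and aggregates them into a holistic feature, a student encoder with learnable prompt vectors, and a feature-alignment distillation loss. All module names, dimensions, the attention-based aggregation, and the cosine distillation objective are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the teacher-student distillation idea described in the
# abstract. Module names, dimensions, and loss choices are assumptions, not
# details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherPartAggregator(nn.Module):
    """Teacher side: encode each body-part crop and its text description
    (e.g., with frozen CLIP image/text encoders), then pool the part features
    into a single holistic teacher feature."""

    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module, dim: int = 512):
        super().__init__()
        self.image_encoder = image_encoder  # assumed: maps (B, 3, H, W) -> (B, dim)
        self.text_encoder = text_encoder    # assumed: maps tokenized text -> (B, dim)
        self.pool = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, part_crops, part_tokens):
        # part_crops: (B, P, 3, H, W); part_tokens: (B, P, L), with P body parts
        B, P = part_crops.shape[:2]
        img = self.image_encoder(part_crops.flatten(0, 1)).view(B, P, -1)
        txt = self.text_encoder(part_tokens.flatten(0, 1)).view(B, P, -1)
        parts = F.normalize(img + txt, dim=-1)      # fuse vision and description per part
        query = parts.mean(dim=1, keepdim=True)     # simple aggregation query
        fused, _ = self.pool(query, parts, parts)   # attend over parts -> holistic feature
        return F.normalize(fused.squeeze(1), dim=-1)


class StudentWithPrompts(nn.Module):
    """Student side: a holistic video encoder plus learnable prompt vectors.
    In a CoOp-style design (an assumption here) the prompts would be prepended
    to class-name token embeddings in the text branch; that branch is omitted."""

    def __init__(self, video_encoder: nn.Module, dim: int = 512, n_prompt: int = 16):
        super().__init__()
        self.video_encoder = video_encoder          # assumed: maps frames -> (B, dim)
        self.prompts = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

    def forward(self, frames):
        return F.normalize(self.video_encoder(frames), dim=-1)


def distillation_loss(student_feat, teacher_feat):
    """Align the student's holistic feature with the aggregated teacher feature.
    A cosine-similarity objective is used here; the paper may use a different loss."""
    return 1.0 - F.cosine_similarity(student_feat, teacher_feat.detach(), dim=-1).mean()
```

During training, the distillation term would typically be combined with the usual video-text classification objective, so the student keeps single-stream inference while inheriting the teacher's part-level knowledge.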
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also at several workshops organized by the Signal Processing Society.