{"title":"Edge Guided Network With Motion Enhancement for Few-Shot Action Recognition","authors":"Kaiwen Du;Weirong Ye;Hanyu Guo;Yan Yan;Hanzi Wang","doi":"10.1109/TCSVT.2025.3533573","DOIUrl":null,"url":null,"abstract":"Existing state-of-the-art methods for few-shot action recognition (FSAR) achieve promising performance by spatial and temporal modeling. However, most current methods ignore the importance of edge information and motion cues, leading to inferior performance. For the few-shot task, it is important to effectively explore limited data. Additionally, effectively utilizing edge information is beneficial for exploring motion cues, and vice versa. In this paper, we propose a novel edge guided network with motion enhancement (EGME) for FSAR. To the best of our knowledge, this is the first work to utilize the edge information as guidance in the FSAR task. Our EGME contains two crucial components, including an edge information extractor (EIE) and a motion enhancement module (ME). Specifically, EIE is used to obtain edge information on video frames. Afterward, the edge information is used as guidance to fuse with the frame features. In addition, ME can adaptively capture motion-sensitive features of videos. It adopts a self-gating mechanism to highlight motion-sensitive regions in videos from a large temporal receptive field. Based on the above designed components, EGME can capture edge information and motion cues, resulting in superior recognition performance. Experimental results on four challenging benchmarks show that EGME performs favorably against recent advanced methods.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 6","pages":"5331-5342"},"PeriodicalIF":11.1000,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10852273/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Existing state-of-the-art methods for few-shot action recognition (FSAR) achieve promising performance through spatial and temporal modeling. However, most current methods ignore the importance of edge information and motion cues, leading to inferior performance. In the few-shot setting, it is important to effectively exploit the limited available data. Moreover, effectively utilizing edge information is beneficial for exploring motion cues, and vice versa. In this paper, we propose a novel edge guided network with motion enhancement (EGME) for FSAR. To the best of our knowledge, this is the first work to utilize edge information as guidance in the FSAR task. Our EGME contains two crucial components: an edge information extractor (EIE) and a motion enhancement module (ME). Specifically, EIE extracts edge information from video frames; this edge information then serves as guidance and is fused with the frame features. In addition, ME adaptively captures motion-sensitive features of videos, adopting a self-gating mechanism to highlight motion-sensitive regions over a large temporal receptive field. With these two components, EGME captures both edge information and motion cues, resulting in superior recognition performance. Experimental results on four challenging benchmarks show that EGME performs favorably against recent advanced methods.
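To make the abstract's two components more concrete, below is a minimal PyTorch sketch of how an edge-guided fusion step and a self-gated motion-enhancement step could be realized. The abstract gives no implementation details, so everything here (the Sobel-based edge extraction, the 1x1 guidance projection, the depthwise temporal convolution, and names such as EdgeInformationExtractor, MotionEnhancement, feat_dim, and temporal_kernel) is a hypothetical reading of the described mechanisms, not the authors' actual EGME code.

```python
# Illustrative sketch only: the layer choices and hyperparameters below are
# assumptions inferred from the abstract, not the published EGME design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeInformationExtractor(nn.Module):
    """Hypothetical EIE: extracts per-frame edge maps (here via fixed Sobel
    filters) and fuses them with frame features as guidance."""

    def __init__(self, feat_dim: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # Fixed edge filters applied to grayscale frames.
        self.register_buffer("kernel", torch.stack([sobel_x, sobel_y]).unsqueeze(1))
        self.edge_proj = nn.Conv2d(2, feat_dim, kernel_size=1)

    def forward(self, frames: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # frames: (B*T, 3, H, W); feats: (B*T, C, h, w) backbone features
        gray = frames.mean(dim=1, keepdim=True)
        edges = F.conv2d(gray, self.kernel, padding=1)        # (B*T, 2, H, W)
        edges = F.interpolate(edges, size=feats.shape[-2:])   # match feature size
        guidance = torch.sigmoid(self.edge_proj(edges))       # edge guidance map
        return feats * (1.0 + guidance)                       # edge-guided fusion


class MotionEnhancement(nn.Module):
    """Hypothetical ME: a self-gating mechanism over a large temporal
    receptive field that highlights motion-sensitive features."""

    def __init__(self, feat_dim: int, temporal_kernel: int = 5):
        super().__init__()
        # Depthwise temporal convolution; a larger kernel widens the
        # temporal receptive field.
        self.temporal_conv = nn.Conv1d(
            feat_dim, feat_dim, kernel_size=temporal_kernel,
            padding=temporal_kernel // 2, groups=feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C) pooled per-frame features
        x = feats.transpose(1, 2)                   # (B, C, T)
        motion = self.temporal_conv(x) - x          # temporal-difference response
        gate = torch.sigmoid(motion)                # self-gating weights
        return (x * (1.0 + gate)).transpose(1, 2)   # motion-enhanced features
```

In this sketch, the "self-gating" is simply a sigmoid over a temporal-difference response, so features that change across frames receive larger weights; enlarging temporal_kernel widens the temporal receptive field, in line with the abstract's description.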
About the Journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.