Semantic-aware Video Representation for Few-shot Action Recognition

Yutao Tang, Benjamín Béjar, René Vidal
{"title":"Semantic-aware Video Representation for Few-shot Action Recognition.","authors":"Yutao Tang, Benjamín Béjar, René Vidal","doi":"10.1109/wacv57701.2024.00633","DOIUrl":null,"url":null,"abstract":"<p><p>Recent work on action recognition leverages 3D features and textual information to achieve state-of-the-art performance. However, most of the current few-shot action recognition methods still rely on 2D frame-level representations, often require additional components to model temporal relations, and employ complex distance functions to achieve accurate alignment of these representations. In addition, existing methods struggle to effectively integrate textual semantics, some resorting to concatenation or addition of textual and visual features, and some using text merely as an additional supervision without truly achieving feature fusion and information transfer from different modalities. In this work, we propose a simple yet effective <b>S</b>emantic-<b>A</b>ware <b>F</b>ew-<b>S</b>hot <b>A</b>ction <b>R</b>ecognition (<b>SAFSAR</b>) model to address these issues. We show that directly leveraging a 3D feature extractor combined with an effective feature-fusion scheme, and a simple cosine similarity for classification can yield better performance without the need of extra components for temporal modeling or complex distance functions. We introduce an innovative scheme to encode the textual semantics into the video representation which adaptively fuses features from text and video, and encourages the visual encoder to extract more semantically consistent features. In this scheme, SAFSAR achieves alignment and fusion in a compact way. Experiments on five challenging few-shot action recognition benchmarks under various settings demonstrate that the proposed SAFSAR model significantly improves the state-of-the-art performance.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11337110/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/wacv57701.2024.00633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/4/9 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent work on action recognition leverages 3D features and textual information to achieve state-of-the-art performance. However, most current few-shot action recognition methods still rely on 2D frame-level representations, often require additional components to model temporal relations, and employ complex distance functions to achieve accurate alignment of these representations. In addition, existing methods struggle to effectively integrate textual semantics: some resort to concatenation or addition of textual and visual features, and some use text merely as additional supervision without truly achieving feature fusion and information transfer across modalities. In this work, we propose a simple yet effective Semantic-Aware Few-Shot Action Recognition (SAFSAR) model to address these issues. We show that directly leveraging a 3D feature extractor, combined with an effective feature-fusion scheme and a simple cosine similarity for classification, can yield better performance without the need for extra components for temporal modeling or complex distance functions. We introduce an innovative scheme that encodes textual semantics into the video representation, adaptively fuses features from text and video, and encourages the visual encoder to extract more semantically consistent features. In this scheme, SAFSAR achieves alignment and fusion in a compact way. Experiments on five challenging few-shot action recognition benchmarks under various settings demonstrate that the proposed SAFSAR model significantly improves the state-of-the-art performance.
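The abstract describes the pipeline at a high level: a 3D backbone embeds each video clip, textual features are fused into the clip representation, and query clips are classified by cosine similarity against class prototypes built from the support set. The sketch below is not the authors' code; it is a minimal illustration of the cosine-similarity classification step only, assuming fused support and query features are already available. The names `query_feats`, `support_feats`, and `cosine_classify` are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the authors' implementation) of prototype-based
# cosine-similarity classification for a few-shot episode.
import torch
import torch.nn.functional as F

def cosine_classify(query_feats, support_feats, support_labels, n_way):
    """Assign each query clip to the class whose support prototype is most similar.

    query_feats:    (Q, D) fused query features
    support_feats:  (N*K, D) fused support features (N classes, K shots each)
    support_labels: (N*K,) integer class labels in [0, n_way)
    """
    # One prototype per class: the mean of that class's support features.
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                         # (n_way, D)

    # Cosine similarity between every query and every prototype.
    q = F.normalize(query_feats, dim=-1)      # (Q, D), unit-normalized
    p = F.normalize(prototypes, dim=-1)       # (n_way, D), unit-normalized
    logits = q @ p.t()                        # (Q, n_way) cosine similarities
    return logits.argmax(dim=-1)              # predicted class index per query
```

In a 5-way 5-shot episode, for example, `support_feats` would be a (25, D) tensor and the function returns one predicted class index per query clip; no learned distance function or extra temporal-modeling module is involved in this step.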
