Enhancing Class-semantics Features' Locating Performance for Temporal Action Localization

Jianming Zhang, Jianqin Yin
{"title":"Enhancing Class-semantics Features' Locating Performance for Temporal Action Localization","authors":"Jianming Zhang, Jianqin Yin","doi":"10.1109/IC-NIDC54101.2021.9660459","DOIUrl":null,"url":null,"abstract":"Temporal action localization is a fundamental video understanding task. Meanwhile, due to the complex video background, the varied duration and amplitude of the actions, it is also a considerable challenge. Currently, offline class-semantics representation is the mainstream input of this task since untrimmed videos occupy a large memory, high-quality untrimmed videos and annotations are difficult to access. Because these representations only focus on the class-semantics information, they are sub-optimal for the temporal action localization tasks. At the same time, the exploration of localization-semantics representation is very few due to the high resource consumption. Therefore, it is necessary to improve the detection capability of class-semantics representation directly. As an exploration, we propose the ForeBack module to enhance class-semantics features’ locating performance by augmenting the distinction modeling between foreground and background clips. This module could also eliminate part of the noise of inference probability sequences. Furthermore, we use phased training to learn and use the ForeBack module more effectively. 
Finally, we reveal the effectiveness of our approach by conduct experiments on THUMOS-14 and the mAP at tIoU@0.5 is improved from 38.8% (BMN action detection baseline) to 47.1%.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660459","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Temporal action localization is a fundamental video understanding task. At the same time, it is a considerable challenge due to complex video backgrounds and the varied duration and amplitude of actions. Currently, offline class-semantics representations are the mainstream input for this task, since untrimmed videos occupy large amounts of memory and high-quality untrimmed videos and annotations are difficult to obtain. Because these representations focus only on class-semantics information, they are sub-optimal for temporal action localization. Meanwhile, localization-semantics representations have rarely been explored due to their high resource consumption. Therefore, it is necessary to directly improve the detection capability of class-semantics representations. As an exploration, we propose the ForeBack module to enhance the locating performance of class-semantics features by augmenting the modeling of the distinction between foreground and background clips. This module can also eliminate part of the noise in inference probability sequences. Furthermore, we use phased training to learn and apply the ForeBack module more effectively. Finally, we demonstrate the effectiveness of our approach through experiments on THUMOS-14, where the mAP at tIoU@0.5 improves from 38.8% (BMN action detection baseline) to 47.1%.
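The reported result hinges on the temporal IoU (tIoU) matching criterion: a predicted segment counts as correct at tIoU@0.5 only if it overlaps a ground-truth action segment by at least 0.5. As a minimal sketch of this standard metric (the segment values below are illustrative, not drawn from the paper):

```python
# Temporal IoU (tIoU) between a predicted and a ground-truth action
# segment, the overlap criterion behind the mAP@tIoU=0.5 metric
# reported on THUMOS-14. Segments are (start, end) times in seconds.

def temporal_iou(pred, gt):
    """Return the tIoU of two 1-D segments given as (start, end)."""
    p_start, p_end = pred
    g_start, g_end = gt
    # Length of the overlapping interval, clamped at zero.
    inter = max(0.0, min(p_end, g_end) - max(p_start, g_start))
    # Union = sum of lengths minus the overlap.
    union = (p_end - p_start) + (g_end - g_start) - inter
    return inter / union if union > 0 else 0.0

# Example: segments (10, 20) and (12, 22) overlap by 8 s over a 12 s union.
print(temporal_iou((10.0, 20.0), (12.0, 22.0)))  # 8/12 ≈ 0.667, a hit at tIoU@0.5
```

Raising mAP at a fixed tIoU threshold therefore requires tighter segment boundaries, which is why sharpening the foreground/background distinction of the features can translate directly into the 38.8% → 47.1% gain.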