Title: Sparse Method Towards Temporal Action Detection
Authors: Lijuan Wang, Suguo Zhu, Wuteng Qi, Jin Yang
DOI: 10.1109/ISPACS57703.2022.10082820
Venue: 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)
Published: 2022-11-22
Citations: 0
Abstract
Temporal action detection aims to correctly predict the categories and temporal intervals of actions in an untrimmed video using only video-level labels, a basic but challenging task in video understanding. Inspired by Sparse R-CNN in object detection, we present a purely sparse method for temporal action detection. In our method, a fixed sparse set of learnable temporal proposals, $\mathbf{N}$ in total (e.g., 50), is provided to a dynamic action interaction head to perform classification and localization. Our sparse temporal action detection method completely avoids all efforts related to temporal candidate design and many-to-one label assignment. More importantly, final predictions are output directly, without a non-maximum suppression post-processing step. Extensive experiments show that our method achieves state-of-the-art performance for both action proposal and localization on the THUMOS14 detection benchmark and competitive performance on the ActivityNet-1.3 challenge.
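To make the sparse-proposal idea concrete, the following is a minimal NumPy sketch (not the authors' implementation): each of the N learnable proposals is a normalized (center, length) pair, a toy interaction head refines it and scores it per class, and every proposal directly yields one detection with no non-maximum suppression. The feature dimension, head weights, and per-proposal features are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

N, C, D = 50, 20, 64  # N proposals, C action classes, feature dim (illustrative sizes)

# Learnable temporal proposals: normalized (center, length) pairs,
# the 1D analogue of Sparse R-CNN's learnable boxes.
proposals = rng.uniform(size=(N, 2))

# Stand-in for the per-proposal features that the dynamic interaction
# head would compute from the video features.
proposal_feats = rng.standard_normal((N, D))

# Toy linear "heads" for classification and boundary regression (hypothetical weights).
W_cls = rng.standard_normal((D, C)) * 0.1
W_reg = rng.standard_normal((D, 2)) * 0.01

cls_logits = proposal_feats @ W_cls          # (N, C) per-class scores
deltas = proposal_feats @ W_reg              # (N, 2) refinements to (center, length)
refined = np.clip(proposals + deltas, 0.0, 1.0)

# Direct set prediction: each proposal emits one (label, score, start, end) -- no NMS.
labels = np.argmax(cls_logits, axis=1)
scores = np.max(cls_logits, axis=1)
starts = refined[:, 0] - refined[:, 1] / 2.0
ends = refined[:, 0] + refined[:, 1] / 2.0
detections = list(zip(labels, scores, starts, ends))
```

In training, such a design would pair each ground-truth action with exactly one proposal (one-to-one assignment, e.g. via bipartite matching), which is what removes the need for dense candidate design and NMS at inference.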