{"title":"Enhancing Class-semantics Features' Locating Performance for Temporal Action Localization","authors":"Jianming Zhang, Jianqin Yin","doi":"10.1109/IC-NIDC54101.2021.9660459","DOIUrl":null,"url":null,"abstract":"Temporal action localization is a fundamental video understanding task. Meanwhile, due to the complex video background, the varied duration and amplitude of the actions, it is also a considerable challenge. Currently, offline class-semantics representation is the mainstream input of this task since untrimmed videos occupy a large memory, high-quality untrimmed videos and annotations are difficult to access. Because these representations only focus on the class-semantics information, they are sub-optimal for the temporal action localization tasks. At the same time, the exploration of localization-semantics representation is very few due to the high resource consumption. Therefore, it is necessary to improve the detection capability of class-semantics representation directly. As an exploration, we propose the ForeBack module to enhance class-semantics features’ locating performance by augmenting the distinction modeling between foreground and background clips. This module could also eliminate part of the noise of inference probability sequences. Furthermore, we use phased training to learn and use the ForeBack module more effectively. Finally, we reveal the effectiveness of our approach by conduct experiments on THUMOS-14 and the mAP at tIoU@0.5 is improved from 38.8% (BMN action detection baseline) to 47.1%.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660459","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Temporal action localization is a fundamental video understanding task. It is also a considerable challenge due to complex video backgrounds and the varied durations and amplitudes of actions. Currently, offline class-semantics representations are the mainstream input for this task, since untrimmed videos occupy large amounts of memory and high-quality untrimmed videos and annotations are difficult to obtain. Because these representations focus only on class-semantics information, they are sub-optimal for temporal action localization. At the same time, localization-semantics representations have seen little exploration due to their high resource consumption. It is therefore necessary to improve the detection capability of class-semantics representations directly. As an exploration, we propose the ForeBack module, which enhances the locating performance of class-semantics features by strengthening the modeling of the distinction between foreground and background clips. This module can also eliminate part of the noise in inference probability sequences. Furthermore, we use phased training to learn and apply the ForeBack module more effectively. Finally, we demonstrate the effectiveness of our approach through experiments on THUMOS-14, where the mAP at tIoU@0.5 improves from 38.8% (the BMN action detection baseline) to 47.1%.
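The abstract does not publish the ForeBack architecture itself, so the following is only a minimal sketch of the two ideas it names: a foreground/background distinction module that reweights clip-level class-semantics features, and a simple noise-suppression step over an inference probability sequence. All names (`ForeBackSketch`, `smooth_probs`, `feat_dim`, `hidden_dim`) and the specific layer choices are hypothetical illustrations, not the paper's method.

```python
import torch
import torch.nn as nn


class ForeBackSketch(nn.Module):
    """Hypothetical foreground/background distinction module.

    Assumes clip-level class-semantics features of shape (batch, T, C),
    e.g. two-stream features over T clips, and predicts a per-clip
    foreground probability used to reweight the features before they
    are fed to a localization head such as BMN.
    """

    def __init__(self, feat_dim: int = 400, hidden_dim: int = 256):
        super().__init__()
        # Temporal convolutions let each clip's foreground score
        # depend on its neighbors, not just the clip itself.
        self.scorer = nn.Sequential(
            nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, 1, kernel_size=1),
            nn.Sigmoid(),  # per-clip foreground probability in [0, 1]
        )

    def forward(self, feats: torch.Tensor):
        # feats: (batch, T, C); Conv1d expects (batch, C, T).
        x = feats.transpose(1, 2)
        fg_prob = self.scorer(x)                   # (batch, 1, T)
        # Emphasize foreground clips, suppress background clips.
        enhanced = (x * fg_prob).transpose(1, 2)   # (batch, T, C)
        return enhanced, fg_prob.squeeze(1)        # features, (batch, T)


def smooth_probs(probs: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Moving-average smoothing of an inference probability sequence.

    A simple stand-in for the noise suppression the abstract mentions;
    probs has shape (batch, T) and k should be odd so length is kept.
    """
    kernel = torch.ones(1, 1, k, device=probs.device) / k
    return nn.functional.conv1d(
        probs.unsqueeze(1), kernel, padding=k // 2
    ).squeeze(1)
```

Under these assumptions, the phased training the abstract describes could correspond to first training the scorer on foreground/background labels derived from ground-truth segments, then fine-tuning it jointly with the downstream localization head; the abstract does not specify the schedule.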