Spatial-Temporal Separable Attention for Video Action Recognition

Xi Guo, Yikun Hu, Fang Chen, Yuhui Jin, Jian Qiao, Jian Huang, Qin Yang

2022 International Conference on Frontiers of Artificial Intelligence and Machine Learning (FAIML), June 2022. DOI: 10.1109/FAIML57028.2022.00050
Abstract: Convolutional neural networks (CNNs) have proved to be an efficient approach for a variety of visual recognition tasks. However, it is more difficult for CNNs to capture long-range spatial-temporal cues in dynamic videos than in static images. Recent nonlocal neural networks attempt to overcome this limitation with a self-attention mechanism that computes pairwise affinities between all spatial-temporal positions, but doing so introduces a substantial computational burden. In this paper, we propose a spatial-temporal separable attention module (STSAM) to reduce this computational complexity. Experimental results on the Kinetics-400 benchmark show that our model achieves better performance while adding fewer extra FLOPs than nonlocal neural networks.
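The abstract does not spell out the module's internals, but the factorization idea it names can be sketched. The following is a minimal PyTorch sketch of generic separable spatial-temporal attention, not the authors' exact STSAM: the class name `SeparableAttention`, the use of `nn.MultiheadAttention`, the residual connections, and all hyperparameters are assumptions for illustration. The point it demonstrates is the complexity reduction: attending over the H*W positions within each frame and then over the T positions at each spatial location costs O(T*(H*W)^2 + H*W*T^2) affinities, versus O((T*H*W)^2) for full nonlocal attention.

```python
import torch
import torch.nn as nn


class SeparableAttention(nn.Module):
    """Hypothetical sketch of spatial-temporal separable attention.

    Full nonlocal attention compares all T*H*W positions pairwise.
    Here attention is factorized: first over the H*W spatial positions
    within each frame, then over the T temporal positions at each
    spatial location, shrinking the affinity matrices accordingly.
    """

    def __init__(self, channels: int, num_heads: int = 1):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) video feature map
        b, c, t, h, w = x.shape

        # Spatial attention: one sequence of H*W tokens per (clip, frame).
        xs = x.permute(0, 2, 3, 4, 1).reshape(b * t, h * w, c)
        xs = xs + self.spatial_attn(xs, xs, xs, need_weights=False)[0]

        # Temporal attention: one sequence of T tokens per (clip, location).
        xt = xs.reshape(b, t, h * w, c).permute(0, 2, 1, 3).reshape(b * h * w, t, c)
        xt = xt + self.temporal_attn(xt, xt, xt, need_weights=False)[0]

        # Restore (B, C, T, H, W).
        return xt.reshape(b, h * w, t, c).permute(0, 3, 2, 1).reshape(b, c, t, h, w)


# Example: an 8-frame clip with 64 channels on a 14x14 feature map.
x = torch.randn(2, 64, 8, 14, 14)
y = SeparableAttention(channels=64)(x)
print(y.shape)  # torch.Size([2, 64, 8, 14, 14]) -- shape-preserving, like a nonlocal block
```

As with a standard nonlocal block, the module is shape-preserving, so a block like this could in principle be dropped between convolutional stages of a video CNN; where STSAM is actually inserted and how its two attentions are parameterized is specified in the paper itself.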