Authors: Yuhao Ouyang, Xiangqian Li
DOI: 10.3390/e27040368
Journal: Entropy, Vol. 27, No. 4 (JCR Q2, Physics, Multidisciplinary; Impact Factor 2.1)
Publication date: 2025-03-31
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12025861/pdf/
Citation count: 0
Action Recognition with 3D Residual Attention and Cross Entropy.
This study proposes a three-dimensional (3D) residual attention network (3DRFNet) for human activity recognition that learns spatiotemporal representations from video. The core innovation integrates attention mechanisms into the 3D ResNet framework to emphasize key features and suppress irrelevant ones. In each 3D ResNet block, channel and spatial attention mechanisms generate attention maps for tensor segments, which are then multiplied element-wise with the input feature maps to emphasize key features. Additionally, the integration of Fast Fourier Convolution (FFC) enhances the network's capability to capture temporal and spatial features effectively. The cross-entropy loss function measures the difference between the predicted values and the ground truth (GT) and guides the model's backpropagation. Experimental results demonstrate that 3DRFNet achieves state-of-the-art (SOTA) performance in human action recognition, reaching accuracies of 91.7% and 98.7% on the HMDB-51 and UCF-101 datasets, respectively. These results highlight 3DRFNet's advantages in recognition accuracy and robustness, particularly its ability to capture key behavioral features in videos using both attention mechanisms.
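The abstract describes two mechanisms that can be sketched concretely: attention maps (channel and spatial) multiplied element-wise with the input feature map, and a softmax cross-entropy loss comparing predictions with the ground truth. The sketch below is a minimal NumPy illustration of those ideas under simplifying assumptions (average-pooling plus a sigmoid in place of the paper's learned attention sub-networks; a single 4D feature block of shape channels × time × height × width rather than the full 3DRFNet architecture). The function names are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, T, H, W). Pool over time and space to get one score per
    # channel, squash to (0, 1), then rescale each channel's features.
    pooled = feat.mean(axis=(1, 2, 3))            # (C,)
    weights = sigmoid(pooled)                     # simplified: no learned MLP
    return feat * weights[:, None, None, None]    # broadcast back to (C, T, H, W)

def spatial_attention(feat):
    # Pool over channels to get a (T, H, W) spatiotemporal attention map,
    # then rescale every channel by that map.
    pooled = feat.mean(axis=0)                    # (T, H, W)
    weights = sigmoid(pooled)
    return feat * weights[None, ...]

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one sample: the loss
    # that guides backpropagation toward the ground-truth class.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 7, 7))             # toy (C, T, H, W) feature block
y = spatial_attention(channel_attention(x))       # attention-reweighted features
print(y.shape)                                    # shape is preserved: (8, 4, 7, 7)
```

Note that both attention steps preserve the tensor's shape, which is what lets the reweighted features flow through the residual connections of a 3D ResNet block unchanged in dimension.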
About the Journal:
Entropy (ISSN 1099-4300) is an international and interdisciplinary journal of entropy and information studies that publishes reviews, regular research papers, and short notes. Our aim is to encourage scientists to publish their theoretical and experimental work in as much detail as possible. There is no restriction on the length of papers. If computations or experiments are reported, full details must be provided so that the results can be reproduced.