ECPNet: An Efficient Attention-Based Convolution Network with Pseudo-3D Block for Human Action Recognition

Xiuping Bao, Jiabin Yuan, Bei Chen
DOI: 10.1109/ICTAI.2019.00089
Published in: 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), November 2019
Citations: 0

Abstract

Human action recognition has become an important task in computer vision and has received a significant amount of research interest in recent years. The Convolutional Neural Network (CNN) has shown its power in image recognition tasks; in the field of video recognition, however, it remains a challenging problem. In this paper, we introduce a highly efficient attention-based convolutional network named ECPNet for video understanding. ECPNet adopts a framework that connects a 2D CNN and a pseudo-3D CNN in sequence. Pseudo-3D means that the traditional 3 × 3 × 3 kernel is replaced with two 3D convolutional filters of shape 1 × 3 × 3 and 3 × 1 × 1. Our ECPNet combines the advantages of both 2D and 3D CNNs: (1) ECPNet is an end-to-end network that can learn appearance information from frames and motion information between frames. (2) ECPNet requires fewer computing resources and less memory than many state-of-the-art models. (3) ECPNet is easy to scale to different runtime and classification-accuracy requirements. We evaluate the proposed model on three popular video benchmarks for human action recognition: Kinetics-mini (a split of the full Kinetics), UCF101, and HMDB51. Our ECPNet achieves excellent performance on the above datasets at a lower time cost.
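The efficiency claim behind the pseudo-3D block can be illustrated with a simple parameter count. The sketch below compares a standard 3 × 3 × 3 convolution against the 1 × 3 × 3 (spatial) plus 3 × 1 × 1 (temporal) factorization described in the abstract; the channel widths are illustrative assumptions, not values taken from the paper.

```python
# Parameter-count sketch for the pseudo-3D factorization (assumed setup:
# c_in = c_out = 64; biases omitted for clarity).

def conv3d_params(c_in, c_out, t, h, w):
    """Weight count of a 3D convolution with kernel (t, h, w)."""
    return c_in * c_out * t * h * w

c_in = c_out = 64  # hypothetical channel widths, for illustration only

full = conv3d_params(c_in, c_out, 3, 3, 3)         # traditional 3x3x3 kernel
pseudo = (conv3d_params(c_in, c_out, 1, 3, 3)      # spatial 1x3x3 filter
          + conv3d_params(c_out, c_out, 3, 1, 1))  # temporal 3x1x1 filter

print(full, pseudo, round(full / pseudo, 2))  # → 110592 49152 2.25
```

Under these assumptions the factorized block uses 2.25× fewer weights than the full 3D kernel, which is the source of the reduced compute and memory cost the abstract refers to.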