Fused behavior recognition model based on attention mechanism.

CAS Region 4 (Computer Science) · JCR Q1 (Arts and Humanities)
Lei Chen, Rui Liu, Dongsheng Zhou, Xin Yang, Qiang Zhang
DOI: 10.1186/s42492-020-00045-x
Journal: Visual Computing for Industry, Biomedicine, and Art, vol. 3, no. 1, p. 7
Published: 2020-03-12 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7099545/pdf/
Citations: 0

Abstract

With the rapid development of deep learning technology, behavior recognition based on video streams has made great progress in recent years. However, two problems remain to be solved: (1) to improve recognition performance, models have tended to become deeper, wider, and more complex, which degrades their real-time performance; (2) some actions in existing datasets are so similar that they are difficult to distinguish. To address these problems, this study constructs ResNet34-3DRes18, a lightweight and efficient model that fuses two-dimensional (2D) and three-dimensional (3D) convolutions. The model uses a 2D convolutional neural network (2DCNN) to extract feature maps from the input images and a 3DCNN to model the temporal relationships between frames, so that it exploits the 3DCNN's strength in video temporal modeling while keeping model complexity low. Compared with state-of-the-art models, this method achieves excellent performance at a faster speed. Furthermore, to distinguish between similar motions in the datasets, an attention gate mechanism is added, yielding the Res34-SE-IM-Net attention recognition model. Res34-SE-IM-Net achieved top-1 accuracies of 71.85%, 92.196%, and 36.5% on the test sets of the HMDB51, UCF101, and Something-Something v1 datasets, respectively (the predicted label is the class with the largest value in the output probability vector; a classification is correct when this label matches the target label of the motion).
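The top-1 accuracy metric defined in the parenthetical above can be sketched in a few lines of plain Python with NumPy (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def top1_accuracy(probs, targets):
    """Top-1 accuracy: the predicted label is the index of the largest
    entry in each output probability vector; a sample counts as correct
    when that index equals the target label."""
    preds = np.argmax(probs, axis=1)
    return float(np.mean(preds == np.asarray(targets)))

# Three samples over four hypothetical action classes (rows sum to 1).
probs = np.array([
    [0.10, 0.70, 0.10, 0.10],   # predicted class 1
    [0.25, 0.25, 0.40, 0.10],   # predicted class 2
    [0.50, 0.20, 0.20, 0.10],   # predicted class 0
])
targets = [1, 2, 3]             # the last sample is misclassified

print(top1_accuracy(probs, targets))  # → 0.666... (2 of 3 correct)
```

The reported 71.85%, 92.196%, and 36.5% figures are this ratio computed over the full test set of each dataset.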

Source journal: Visual Computing for Industry, Biomedicine, and Art (Arts and Humanities – Visual Arts and Performing Arts)
CiteScore: 5.60
Self-citation rate: 0.00%
Articles per year: 28
Review time: 5 weeks