A Hybrid Convolutional and Graph Neural Network for Human Action Detection in Static Images

Impact Factor: 1.8 · CAS Region 3 (Engineering & Technology) · JCR Q3, ENGINEERING, ELECTRICAL & ELECTRONIC
Xinbiao Lu, Hao Xing
Journal: Circuits, Systems and Signal Processing
DOI: https://doi.org/10.1007/s00034-024-02815-x
Published: 2024-08-23
Citations: 0

Abstract

Human action detection in static images is an active and challenging field within computer vision. Given the limited features of a single image, achieving precise detection results requires full utilization of the image’s intrinsic features, as well as the integration of methods from other fields to process the images and generate additional features. In this paper, we propose a novel dual-pathway model for action detection, whose main pathway employs a convolutional neural network to extract image features and predict the probability of the image belonging to each action class. Meanwhile, the auxiliary pathway uses a pose estimation algorithm to obtain human key points and their connection information, from which a graphical human model is constructed for each image. These graphical models are then transformed into graph data and fed into a graph neural network for feature extraction and probability prediction. Finally, a corresponding connected neural network that we propose fuses the probability vectors generated by the two pathways, learning the weight of each action class in each vector to enable their fusion. Transfer learning is also used in our model to improve its training speed and detection accuracy. Experimental results on three challenging datasets, Stanford40, PPMI and MPII, illustrate the superiority of the proposed method.
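The fusion step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, toy probability vectors, and unit weights are our own assumptions. Each pathway emits a per-class probability vector, and a learned per-class weight for each pathway scales its vote before a final softmax.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse(p_cnn, p_gnn, w_cnn, w_gnn):
    """Combine the two pathways' per-class probabilities using
    learned per-class weights (one weight per action class per pathway)."""
    return softmax(w_cnn * p_cnn + w_gnn * p_gnn)

# Toy example with 3 action classes.
p_cnn = np.array([0.6, 0.3, 0.1])   # main (CNN) pathway prediction
p_gnn = np.array([0.5, 0.4, 0.1])   # auxiliary (GNN) pathway prediction

# In the paper these weights are learned during training; here we
# simply initialize them uniformly for illustration.
w_cnn = np.ones(3)
w_gnn = np.ones(3)

p = fuse(p_cnn, p_gnn, w_cnn, w_gnn)
print(p.argmax())  # index of the predicted action class
```

In a trained model, the fusion weights would let the network trust the GNN pathway more for pose-dominated actions and the CNN pathway more for context-dominated ones.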


Source journal: Circuits, Systems and Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 4.80
Self-citation rate: 13.00%
Articles per year: 321
Review time: 4.6 months
Journal description: Rapid developments in the analog and digital processing of signals for communication, control, and computer systems have made the theory of electrical circuits and signal processing a burgeoning area of research and design. The aim of Circuits, Systems, and Signal Processing (CSSP) is to help meet the needs of outlets for significant research papers and state-of-the-art review articles in the area. The scope of the journal is broad, ranging from mathematical foundations to practical engineering design. It encompasses, but is not limited to, such topics as linear and nonlinear networks, distributed circuits and systems, multi-dimensional signals and systems, analog filters and signal processing, digital filters and signal processing, statistical signal processing, multimedia, computer aided design, graph theory, neural systems, communication circuits and systems, and VLSI signal processing. The Editorial Board is international, and papers are welcome from throughout the world. The journal is devoted primarily to research papers, but survey, expository, and tutorial papers are also published. Circuits, Systems, and Signal Processing (CSSP) is published twelve times annually.