{"title":"基于图神经网络的交互推理改进动作识别","authors":"Wu Luo, Chongyang Zhang, Xiaoyun Zhang, Haiyan Wu","doi":"10.1109/VCIP47243.2019.8965768","DOIUrl":null,"url":null,"abstract":"Recent human action recognition methods mainly model a two-stream or 3D convolution deep learning network, with which humans spatial-temporal features can be exploited and utilized effectively. However, due to the ignoring of interaction exploiting, most of these methods cannot get good enough performance. In this paper, we propose a novel action recognition framework with Graph Convolutional Network (GCN) based Interaction Reasoning: Objects and discriminative scene patches are detected using an object detector and class active mapping (CAM), respectively; and then a GCN is introduced to model the interaction among the detected objects and scene patches. Evaluation of two widely used video action benchmarks shows that the proposed work can achieve comparable performance: the accuracy up to 43.6% at EPIC Kitchen, and 47.0% at VLOG benchmark without using optical flow, respectively.","PeriodicalId":388109,"journal":{"name":"2019 IEEE Visual Communications and Image Processing (VCIP)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Improving Action Recognition with the Graph-Neural-Network-based Interaction Reasoning\",\"authors\":\"Wu Luo, Chongyang Zhang, Xiaoyun Zhang, Haiyan Wu\",\"doi\":\"10.1109/VCIP47243.2019.8965768\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent human action recognition methods mainly model a two-stream or 3D convolution deep learning network, with which humans spatial-temporal features can be exploited and utilized effectively. However, due to the ignoring of interaction exploiting, most of these methods cannot get good enough performance. In this paper, we propose a novel action recognition framework with Graph Convolutional Network (GCN) based Interaction Reasoning: Objects and discriminative scene patches are detected using an object detector and class active mapping (CAM), respectively; and then a GCN is introduced to model the interaction among the detected objects and scene patches. 
Evaluation of two widely used video action benchmarks shows that the proposed work can achieve comparable performance: the accuracy up to 43.6% at EPIC Kitchen, and 47.0% at VLOG benchmark without using optical flow, respectively.\",\"PeriodicalId\":388109,\"journal\":{\"name\":\"2019 IEEE Visual Communications and Image Processing (VCIP)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Visual Communications and Image Processing (VCIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VCIP47243.2019.8965768\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP47243.2019.8965768","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving Action Recognition with the Graph-Neural-Network-based Interaction Reasoning
Most recent human action recognition methods are built on two-stream or 3D-convolution deep networks, with which human spatial-temporal features can be exploited effectively. However, because they ignore interaction modeling, most of these methods cannot achieve sufficiently good performance. In this paper, we propose a novel action recognition framework with Graph Convolutional Network (GCN) based interaction reasoning: objects and discriminative scene patches are detected using an object detector and class activation mapping (CAM), respectively, and a GCN is then introduced to model the interactions among the detected objects and scene patches. Evaluation on two widely used video action benchmarks shows that the proposed method achieves competitive performance without using optical flow: accuracy of up to 43.6% on EPIC-Kitchens and 47.0% on the VLOG benchmark.
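To make the interaction-reasoning step concrete, below is a minimal PyTorch sketch of a one-layer GCN that propagates information among region nodes (detected objects plus CAM-derived scene patches) and pools them into an action prediction. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the class name InteractionGCN, the feature dimensions, and the affinity matrix used as the graph adjacency are all hypothetical, and region features are assumed to have already been extracted (e.g., via RoI pooling over the detector and CAM boxes).

    # A minimal sketch of GCN-based interaction reasoning over detected regions.
    # Assumes per-region features are precomputed; all names/dims are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InteractionGCN(nn.Module):
        """One-layer GCN that propagates information among N region nodes."""

        def __init__(self, feat_dim: int, num_classes: int):
            super().__init__()
            self.gcn_weight = nn.Linear(feat_dim, feat_dim, bias=False)
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # nodes: (B, N, D) region features; adj: (B, N, N) pairwise affinities.
            # Row-normalize the adjacency so each node averages over its neighbors.
            adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
            # Graph convolution: aggregate neighbor features, then transform.
            nodes = F.relu(self.gcn_weight(torch.bmm(adj, nodes)))
            # Pool all region nodes into one video-level representation.
            return self.classifier(nodes.mean(dim=1))

    # Usage with dummy data: 8 regions (objects + scene patches) per clip.
    model = InteractionGCN(feat_dim=256, num_classes=10)
    feats = torch.randn(2, 8, 256)                       # region features
    affinity = torch.softmax(torch.randn(2, 8, 8), -1)   # stand-in pairwise affinities
    logits = model(feats, affinity)                      # (2, 10) action scores

In practice the adjacency would be derived from the regions themselves (e.g., feature similarity or spatial overlap) rather than random, and the pooled graph representation could be fused with a backbone clip feature before classification; the sketch only shows the message-passing and pooling pattern the abstract describes.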