{"title":"融合图像和惯性传感器数据的深度神经网络多模态人体动作识别","authors":"Inhwan Hwang, Geonho Cha, Songhwai Oh","doi":"10.1109/MFI.2017.8170441","DOIUrl":null,"url":null,"abstract":"Human action recognition has been studied in many fields including computer vision and sensor networks using inertial sensors. However, there are limitations such as spatial constraints, occlusions in images, sensor unreliability, and the inconvenience of users. In order to solve these problems we suggest a sensor fusion method for human action recognition exploiting RGB images from a single fixed camera and a single wrist mounted inertial sensor. These two different domain information can complement each other to fill the deficiencies that exist in both image based and inertial sensor based human action recognition methods. We propose two convolutional neural network (CNN) based feature extraction networks for image and inertial sensor data and a recurrent neural network (RNN) based classification network with long short term memory (LSTM) units. Training of deep neural networks and testing are done with synchronized images and sensor data collected from five individuals. The proposed method results in better performance compared to single sensor-based methods with an accuracy of 86.9% in cross-validation. We also verify that the proposed algorithm robustly classifies the target action when there are failures in detecting body joints from images.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Multi-modal human action recognition using deep neural networks fusing image and inertial sensor data\",\"authors\":\"Inhwan Hwang, Geonho Cha, Songhwai Oh\",\"doi\":\"10.1109/MFI.2017.8170441\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human action recognition has been studied in many fields including computer vision and sensor networks using inertial sensors. However, there are limitations such as spatial constraints, occlusions in images, sensor unreliability, and the inconvenience of users. In order to solve these problems we suggest a sensor fusion method for human action recognition exploiting RGB images from a single fixed camera and a single wrist mounted inertial sensor. These two different domain information can complement each other to fill the deficiencies that exist in both image based and inertial sensor based human action recognition methods. We propose two convolutional neural network (CNN) based feature extraction networks for image and inertial sensor data and a recurrent neural network (RNN) based classification network with long short term memory (LSTM) units. Training of deep neural networks and testing are done with synchronized images and sensor data collected from five individuals. The proposed method results in better performance compared to single sensor-based methods with an accuracy of 86.9% in cross-validation. 
We also verify that the proposed algorithm robustly classifies the target action when there are failures in detecting body joints from images.\",\"PeriodicalId\":402371,\"journal\":{\"name\":\"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MFI.2017.8170441\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI.2017.8170441","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Human action recognition has been studied in many fields, including computer vision and sensor networks using inertial sensors. However, each approach has limitations, such as spatial constraints, occlusions in images, sensor unreliability, and inconvenience to users. To address these problems, we propose a sensor fusion method for human action recognition that exploits RGB images from a single fixed camera and a single wrist-mounted inertial sensor. Information from these two domains can complement each other and fill the deficiencies of both image-based and inertial-sensor-based human action recognition methods. We propose two convolutional neural network (CNN) based feature extraction networks, one for image data and one for inertial sensor data, together with a recurrent neural network (RNN) based classification network built on long short-term memory (LSTM) units. The deep neural networks are trained and tested on synchronized images and sensor data collected from five individuals. The proposed method outperforms single-sensor methods, achieving an accuracy of 86.9% in cross-validation. We also verify that the proposed algorithm classifies the target action robustly when body-joint detection from the images fails.
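
The abstract describes the architecture only at a high level. As a concrete illustration, the sketch below wires up the described pipeline in PyTorch: two modality-specific CNN feature extractors whose per-frame outputs are fused and fed to an LSTM classification head. The input shapes, layer sizes, and concatenation-based fusion are our assumptions for illustration, not the authors' exact design.

# Minimal PyTorch sketch of the fusion architecture described in the
# abstract: one CNN feature extractor per modality, features concatenated
# per time step, and an LSTM classification head. All layer sizes and
# input shapes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class MultiModalActionNet(nn.Module):
    def __init__(self, num_actions=10, feat_dim=128):
        super().__init__()
        # Image branch: assumes per-frame RGB inputs of size 3x64x64.
        self.image_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )
        # Inertial branch: assumes a 6-channel accelerometer/gyroscope
        # window per frame, processed with 1D convolutions over time.
        self.imu_cnn = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(64 * 4, feat_dim),
        )
        # Fusion by concatenation, then an LSTM over the frame sequence.
        self.lstm = nn.LSTM(2 * feat_dim, 128, batch_first=True)
        self.classifier = nn.Linear(128, num_actions)

    def forward(self, images, imu):
        # images: (batch, time, 3, 64, 64); imu: (batch, time, 6, window)
        b, t = images.shape[:2]
        img_feat = self.image_cnn(images.flatten(0, 1)).view(b, t, -1)
        imu_feat = self.imu_cnn(imu.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([img_feat, imu_feat], dim=-1)
        out, _ = self.lstm(fused)
        # Classify from the hidden state at the last time step.
        return self.classifier(out[:, -1])


# Usage on synthetic shapes: 2 clips, 20 synchronized frames each.
model = MultiModalActionNet()
logits = model(torch.randn(2, 20, 3, 64, 64), torch.randn(2, 20, 6, 32))
print(logits.shape)  # torch.Size([2, 10])

Concatenating per-frame features is the simplest fusion point. Because each branch contributes its own feature vector at every time step, the LSTM can still lean on the inertial features when the image features degrade, which is consistent with the robustness claim in the abstract about failed body-joint detection.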