{"title":"使用可穿戴传感器的深度人类活动识别","authors":"I. A. Lawal, Sophia Bano","doi":"10.1145/3316782.3321538","DOIUrl":null,"url":null,"abstract":"This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health monitoring system for the elderlies. Thus, we gathered hip motion signals from two different waist mounted sensors and for each individual sensor, we converted the motion signal into spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNN), one for each of the generated image sequences from the two sensors. The outputs of the trained CNNs are then fused together to predict the final class of the human activity. We evaluate the performance of the proposed method using the cross-subjects testing approach. Our method achieves recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset. This performance is superior to that reported by another state-of-the-art method on the same dataset.","PeriodicalId":264425,"journal":{"name":"Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":"{\"title\":\"Deep human activity recognition using wearable sensors\",\"authors\":\"I. A. Lawal, Sophia Bano\",\"doi\":\"10.1145/3316782.3321538\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. 
Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health monitoring system for the elderlies. Thus, we gathered hip motion signals from two different waist mounted sensors and for each individual sensor, we converted the motion signal into spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNN), one for each of the generated image sequences from the two sensors. The outputs of the trained CNNs are then fused together to predict the final class of the human activity. We evaluate the performance of the proposed method using the cross-subjects testing approach. Our method achieves recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset. This performance is superior to that reported by another state-of-the-art method on the same dataset.\",\"PeriodicalId\":264425,\"journal\":{\"name\":\"Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"28\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3316782.3321538\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive 
Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3316782.3321538","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep human activity recognition using wearable sensors
This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health-monitoring system for the elderly. We gathered hip motion signals from two different waist-mounted sensors and, for each sensor, converted the motion signal into a spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNNs), one for each of the image sequences generated from the two sensors. The outputs of the trained CNNs are then fused to predict the final class of the human activity. We evaluate the performance of the proposed method using a cross-subject testing approach. Our method achieves a recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset, outperforming another state-of-the-art method reported on the same dataset.
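The pipeline the abstract describes — converting a 1-D motion signal into a time-frequency (spectral) image, then fusing the per-class outputs of two independently trained CNNs — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, spectrogram parameters, log scaling, and the averaging fusion rule are all assumptions, and the CNN outputs are placeholder probability vectors.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical motion signal from one waist-mounted sensor:
# 10 s of a single accelerometer axis, sampled at 50 Hz
# (the sampling rate is an assumption, not from the paper).
fs = 50
rng = np.random.default_rng(0)
signal = rng.standard_normal(fs * 10)

# Convert the 1-D signal into a spectral image (time-frequency map).
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=32)
image = np.log1p(Sxx)  # log scaling is a common choice, assumed here

# Late fusion of the two trained CNNs: average their per-class
# softmax outputs (a simple fusion rule, assumed for illustration).
probs_sensor1 = np.array([0.7, 0.2, 0.1])  # placeholder CNN 1 output
probs_sensor2 = np.array([0.6, 0.3, 0.1])  # placeholder CNN 2 output
fused = (probs_sensor1 + probs_sensor2) / 2
predicted_class = int(np.argmax(fused))  # final activity class index
```

In this sketch each sensor's spectral image would feed its own CNN; only the fusion step combines the two streams, which matches the late-fusion architecture the abstract describes.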