{"title":"基于卷积神经网络表征学习的智能手机传感器活动识别","authors":"Tatsuhito Hasegawa, M. Koshino","doi":"10.1145/3372422.3372439","DOIUrl":null,"url":null,"abstract":"Although many researchers have widely investigated activity recognition using smartphone sensing, estimation accuracy can be adversely affected by individual dependence. The result of our survey showed that the process of smartphone sensor based activity recognition that has not been sufficiently discussed, especially using representation learning by Convolutional Neural Network (CNN). The effectiveness of the representation learning model using CNN in activity recognition was verified, as were 10 types of activity recognition models: Deep Neural Network (DNN) using Hand-Crafted (HC) features, simple CNN model, AlexNet, SE-AlexNet, Fully Convolutional Network (FCN), SE-FCN, VGG, SE-VGG, ResNet, and SE-ResNet, using a benchmark dataset for human activity recognition. Finally, the deep learning models were trained a total of 600 times (10 models, 6 types with varying the number of people in training dataset, and 10 trials to reduce the influence of randomness bias). The results indicate that SE-VGG is the most accurate, as many subjects can be comprised in the training data.","PeriodicalId":118684,"journal":{"name":"Proceedings of the 2019 2nd International Conference on Computational Intelligence and Intelligent Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Representation Learning by Convolutional Neural Network for Smartphone Sensor Based Activity Recognition\",\"authors\":\"Tatsuhito Hasegawa, M. Koshino\",\"doi\":\"10.1145/3372422.3372439\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although many researchers have widely investigated activity recognition using smartphone sensing, estimation accuracy can be adversely affected by individual dependence. The result of our survey showed that the process of smartphone sensor based activity recognition that has not been sufficiently discussed, especially using representation learning by Convolutional Neural Network (CNN). The effectiveness of the representation learning model using CNN in activity recognition was verified, as were 10 types of activity recognition models: Deep Neural Network (DNN) using Hand-Crafted (HC) features, simple CNN model, AlexNet, SE-AlexNet, Fully Convolutional Network (FCN), SE-FCN, VGG, SE-VGG, ResNet, and SE-ResNet, using a benchmark dataset for human activity recognition. Finally, the deep learning models were trained a total of 600 times (10 models, 6 types with varying the number of people in training dataset, and 10 trials to reduce the influence of randomness bias). 
The results indicate that SE-VGG is the most accurate, as many subjects can be comprised in the training data.\",\"PeriodicalId\":118684,\"journal\":{\"name\":\"Proceedings of the 2019 2nd International Conference on Computational Intelligence and Intelligent Systems\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 2nd International Conference on Computational Intelligence and Intelligent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3372422.3372439\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 2nd International Conference on Computational Intelligence and Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3372422.3372439","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Although activity recognition using smartphone sensing has been widely investigated, estimation accuracy can be adversely affected by individual dependence. Our survey showed that the process of smartphone-sensor-based activity recognition has not been sufficiently discussed, especially representation learning with Convolutional Neural Networks (CNNs). We verified the effectiveness of CNN-based representation learning for activity recognition by comparing 10 activity recognition models on a benchmark dataset for human activity recognition: a Deep Neural Network (DNN) using Hand-Crafted (HC) features, a simple CNN model, AlexNet, SE-AlexNet, a Fully Convolutional Network (FCN), SE-FCN, VGG, SE-VGG, ResNet, and SE-ResNet. In total, the deep learning models were trained 600 times (10 models × 6 settings varying the number of subjects in the training dataset × 10 trials to reduce the influence of randomness bias). The results indicate that SE-VGG is the most accurate when many subjects are included in the training data.
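To make the SE-augmented architectures concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a squeeze-and-excitation block attached to a VGG-style 1D convolutional block, as would be applied to windowed smartphone sensor signals. All layer sizes, the `SEBlock1d`/`SEConvBlock` names, and the 3-axis/256-sample input are illustrative assumptions.

```python
# Hypothetical sketch of an SE block for 1D sensor features; not the paper's code.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation over the channel axis of 1D sensor features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)          # global average pooling over time
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        w = self.squeeze(x).flatten(1)                  # (batch, channels)
        w = self.excite(w).unsqueeze(-1)                # (batch, channels, 1)
        return x * w                                    # recalibrate channels

class SEConvBlock(nn.Module):
    """VGG-style conv block (conv-BN-ReLU) followed by SE recalibration."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock1d(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.se(self.conv(x))

# Example: a 3-axis accelerometer window of 256 samples -> 64 SE-refined channels.
if __name__ == "__main__":
    x = torch.randn(8, 3, 256)                          # (batch, sensor axes, time steps)
    block = SEConvBlock(in_ch=3, out_ch=64)
    print(block(x).shape)                               # torch.Size([8, 64, 256])
```

Stacking such blocks (with pooling between them) yields an SE-VGG-style network; replacing the plain conv stack with residual connections would give the SE-ResNet variant compared in the paper.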