Human activity recognition using multi-input CNN model with FFT spectrograms

Keiichi Yaguchi, Kazukiyo Ikarigawa, R. Kawasaki, Wataru Miyazaki, Yuki Morikawa, Chihiro Ito, M. Shuzo, Eisaku Maeda
Published in: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers
DOI: 10.1145/3410530.3414342
Publication date: 2020-09-10
Citations: 11

Abstract

This paper describes an activity recognition method developed by Team DSML-TDU for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge. Since the 2018 challenge, our team has been developing human activity recognition models based on a convolutional neural network (CNN) using Fast Fourier Transform (FFT) spectrograms from mobile sensors. In the 2020 challenge, we extended our model to fit various users equipped with sensors in specific positions. Nine modalities of FFT spectrograms, generated from the three axes each of the linear accelerometer, gyroscope, and magnetic sensor data, were used as input to our model. First, we created a CNN model to estimate four carrying positions (Bag, Hand, Hips, and Torso) from the training and validation data; the provided test data was expected to be from Hips. Next, we created another (pre-trained) CNN model to estimate eight activities from the large amount of user 1 training data (Hips). Then, this model was fine-tuned for different users by using the small amounts of validation data for users 2 and 3 (Hips). Finally, an F-measure of 96.7% was obtained by 5-fold cross-validation.
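The abstract does not give the STFT parameters used to build the nine spectrogram modalities (three axes for each of three sensors). As a hedged sketch only, the following shows how per-axis FFT spectrograms could be generated from one three-axis sensor; the sampling rate matches the SHL challenge data, but the window length and overlap are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed parameters: SHL sensor data is sampled at 100 Hz;
# the STFT window length and overlap below are illustrative only.
FS = 100          # sampling rate in Hz
NPERSEG = 64      # STFT window length (assumed)
NOVERLAP = 32     # window overlap (assumed)

def axes_to_spectrograms(signal_3axis):
    """Convert a (n_samples, 3) sensor signal into three FFT
    spectrograms, one per axis, stacked as input channels."""
    specs = []
    for axis in range(signal_3axis.shape[1]):
        _, _, sxx = spectrogram(signal_3axis[:, axis],
                                fs=FS, nperseg=NPERSEG,
                                noverlap=NOVERLAP)
        specs.append(np.log1p(sxx))  # log scale compresses dynamic range
    return np.stack(specs, axis=0)   # shape: (3, freq_bins, time_bins)

# Example: 5 s of synthetic 3-axis accelerometer data
data = np.random.randn(5 * FS, 3)
spec = axes_to_spectrograms(data)
print(spec.shape)  # (3, 33, 14)
```

Repeating this for the linear accelerometer, gyroscope, and magnetic sensor yields the nine spectrogram modalities the abstract describes as model inputs.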
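The reported 96.7% F-measure comes from 5-fold cross-validation, but the abstract does not spell out the averaging scheme. The sketch below shows one standard way to compute it (macro averaging is an assumption, and all function names are hypothetical).

```python
import numpy as np

def f_measure(y_true, y_pred, n_classes):
    """Macro-averaged F-measure: harmonic mean of precision and
    recall per class, averaged over all classes."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        if tp == 0:
            scores.append(0.0)
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        scores.append(2 * precision * recall / (precision + recall))
    return float(np.mean(scores))

def cross_validate(x, y, train_and_predict, n_classes=8, k=5):
    """k-fold cross-validation: hold out each fold once and
    average the per-fold F-measures."""
    folds = np.array_split(np.arange(len(y)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        y_pred = train_and_predict(x[train_idx], y[train_idx], x[test_idx])
        scores.append(f_measure(y[test_idx], y_pred, n_classes))
    return float(np.mean(scores))

# Sanity check: a perfect predictor scores 1.0.
y = np.tile(np.arange(8), 25)            # 200 balanced labels, 8 activities
perfect = f_measure(y, y, n_classes=8)
print(perfect)  # 1.0
```

Here `train_and_predict` would wrap the paper's pre-trained CNN plus the per-user fine-tuning step; eight classes correspond to the eight SHL activities.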