IndRNN based long-term temporal recognition in the spatial and frequency domain

Beidi Zhao, Shuai Li, Yanbo Gao
DOI: 10.1145/3410530.3414355
Venue: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers
Published: 2020-09-10
Citations: 15

Abstract

This paper targets the SHL recognition challenge, which focuses on the location-independent and user-independent activity recognition using smartphone sensors. To address this long-range temporal problem with periodic nature, we propose a new approach (team IndRNN), an Independently Recurrent Neural Network (IndRNN) based long-term temporal activity recognition with spatial and frequency domain features. The data is first segmented into one second sliding windows, then temporal and frequency domain features are extracted as short-term temporal features. A deep IndRNN model is used to predict the unknown test dataset location. Under the predicted location, a deep IndRNN model is further used to classify the 8 activities with best performed features. Finally, transfer learning and model fusion are used to improve the result under the user-independence case. The proposed method achieves 86.94% accuracy on the validation set at the predicted location.
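The pipeline described above (segment sensor streams into one-second windows, extract temporal and frequency-domain features per window, then feed the feature sequence to an IndRNN) can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: the feature set, sampling rate, and weight initialization here are assumptions, and the `IndRNNCell` only shows the defining recurrence of an IndRNN, h_t = relu(W x_t + u ⊙ h_{t-1} + b), where each hidden unit has its own scalar recurrent weight rather than a full recurrent matrix.

```python
import numpy as np

def segment_windows(signal, rate_hz=100, win_s=1.0):
    """Split a 1-D sensor stream into non-overlapping windows.

    rate_hz and win_s are assumed values for illustration; the paper
    uses one-second sliding windows over smartphone sensor data.
    """
    win = int(rate_hz * win_s)
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

def window_features(window):
    """Short-term temporal + frequency-domain features for one window
    (an assumed minimal feature set: mean, std, dominant FFT bin,
    total spectral energy)."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(),      # temporal: mean
        window.std(),       # temporal: standard deviation
        spectrum.argmax(),  # frequency: dominant-frequency bin index
        spectrum.sum(),     # frequency: total spectral energy
    ])

class IndRNNCell:
    """Minimal IndRNN step: h_t = relu(W @ x_t + u * h_{t-1} + b).

    `u * h` is an elementwise product, so each neuron's recurrence is
    independent of the others -- the property that lets IndRNNs be
    stacked deep and trained over long sequences.
    """
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (hid_dim, in_dim))
        self.u = rng.uniform(0.0, 1.0, hid_dim)  # per-neuron recurrent weight
        self.b = np.zeros(hid_dim)

    def step(self, x, h):
        return np.maximum(0.0, self.W @ x + self.u * h + self.b)

    def run(self, xs):
        h = np.zeros(len(self.b))
        for x in xs:
            h = self.step(x, h)
        return h  # final hidden state, usable for classification

# Toy end-to-end run on a synthetic periodic signal.
sig = np.sin(np.linspace(0, 20 * np.pi, 300))
wins = segment_windows(sig, rate_hz=100)             # (3, 100)
feats = np.stack([window_features(w) for w in wins])  # (3, 4)
h = IndRNNCell(in_dim=4, hid_dim=8).run(feats)        # (8,)
```

In the full method a deep stack of such layers (plus a classifier head) would run twice: once to predict the recording location, then again per predicted location to classify the eight activities.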