Fusion from Multimodal Gait Spatiotemporal Data for Human Gait Speed Classifications

Abdullah S. Alharthi, K. Ozanyan
DOI: 10.1109/SENSORS47087.2021.9639816
Published in: 2021 IEEE Sensors, pp. 1-4, 31 October 2021
Citations: 1

Abstract

Human gait patterns remain largely undefined when relying on a single sensing modality. We report a pilot implementation of sensor fusion to classify gait spatiotemporal signals from a publicly available dataset of 50 participants, harvested from four different types of sensors. For fusion we propose a hybrid Convolutional Neural Network and Long Short-Term Memory (hybrid CNN+LSTM) model and a Multi-stream CNN. The classification results are compared to single-modality results obtained with a Single-stream CNN, a state-of-the-art Vision Transformer, and statistical classifier algorithms. The fusion models outperformed the single-modality methods and classified the gait speed of 10 previously unseen, randomly selected subjects with a 97% F1-score across the four gait speed classes.
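The abstract names a hybrid CNN+LSTM as one of the fusion models. The paper does not give the architecture details here, but the general pattern can be sketched as follows: a 1D convolution extracts local features from each time step of the multichannel gait signal, and an LSTM then models the temporal dynamics before a final linear layer produces the four gait-speed class logits. All layer sizes, channel counts, and input dimensions below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Illustrative hybrid CNN+LSTM classifier (hypothetical sizes)."""

    def __init__(self, in_channels=3, num_classes=4):
        super().__init__()
        # Convolutional front end: local spatiotemporal feature extraction
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models the temporal evolution of the conv features
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):             # x: (batch, channels, time)
        f = self.conv(x)              # (batch, 32, time // 2)
        f = f.transpose(1, 2)         # (batch, time // 2, 32) for the LSTM
        _, (h, _) = self.lstm(f)      # h: (num_layers, batch, 64)
        return self.fc(h[-1])         # (batch, num_classes) logits

model = HybridCNNLSTM()
logits = model(torch.randn(8, 3, 100))  # 8 windows, 3 channels, 100 samples
print(logits.shape)
```

A multi-stream variant would run one such front end per sensor modality and concatenate the stream features before classification; this sketch shows only the single-stream hybrid.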