A novel ViT-BILSTM model for physical activity intensity classification in adults using gravity-based acceleration.

Lin Wang, Zizhang Luo, Tianle Zhang
{"title":"A novel ViT-BILSTM model for physical activity intensity classification in adults using gravity-based acceleration.","authors":"Lin Wang, Zizhang Luo, Tianle Zhang","doi":"10.1186/s42490-025-00088-2","DOIUrl":null,"url":null,"abstract":"<p><strong>Aim: </strong>The aim of this study is to apply a novel hybrid framework incorporating a Vision Transformer (ViT) and bidirectional long short-term memory (Bi-LSTM) model for classifying physical activity intensity (PAI) in adults using gravity-based acceleration. Additionally, it further investigates how PAI and temporal window (TW) impacts the model' s accuracy.</p><p><strong>Method: </strong>This research used the Capture-24 dataset, consisting of raw accelerometer data from 151 participants aged 18 to 91. Gravity-based acceleration was utilised to generate images encoding various PAIs. These images were subsequently analysed using the ViT-BiLSTM model, with results presented in confusion matrices and compared with baseline models. The model's robustness was evaluated through temporal stability testing and examination of accuracy and loss curves.</p><p><strong>Result: </strong>The ViT-BiLSTM model excelled in PAI classification task, achieving an overall accuracy of 98.5% ± 1.48% across five TWs-98.7% for 1s, 98.1% for 5s, 98.2% for 10s, 99% for 15s, and 98.65% for 30s of TW. The model consistently exhibited superior accuracy in predicting sedentary (98.9% ± 1%) compared to light physical activity (98.2% ± 2%) and moderate-to-vigorous physical activity (98.2% ± 3%). ANOVA showed no significant accuracy variation across PAIs (F = 2.18, p = 0.13) and TW (F = 0.52, p = 0.72). 
Accuracy and loss curves show the model consistently improves its performance across epochs, demonstrating its excellent robustness.</p><p><strong>Conclusion: </strong>This study demonstrates the ViT-BiLSTM model's efficacy in classifying PAI using gravity-based acceleration, with performance remaining consistent across diverse TWs and intensities. However, PAI and TW could result in slight variations in the model's performance. Future research should concern and investigate the impact of gravity-based acceleration on PAI thresholds, which may influence model's robustness and reliability.</p>","PeriodicalId":72425,"journal":{"name":"BMC biomedical engineering","volume":"7 1","pages":"2"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11786420/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC biomedical engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s42490-025-00088-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Aim: The aim of this study is to apply a novel hybrid framework incorporating a Vision Transformer (ViT) and bidirectional long short-term memory (Bi-LSTM) model for classifying physical activity intensity (PAI) in adults using gravity-based acceleration. Additionally, it investigates how PAI and temporal window (TW) length impact the model's accuracy.

Method: This research used the Capture-24 dataset, consisting of raw accelerometer data from 151 participants aged 18 to 91. Gravity-based acceleration was utilised to generate images encoding various PAIs. These images were subsequently analysed using the ViT-BiLSTM model, with results presented in confusion matrices and compared with baseline models. The model's robustness was evaluated through temporal stability testing and examination of accuracy and loss curves.
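The abstract does not specify how the raw accelerometer stream is segmented before the windows are encoded as images, so the following is only a minimal sketch of the windowing step under stated assumptions: non-overlapping temporal windows, tri-axial samples reduced to a vector-magnitude signal, and a hypothetical sampling frequency (the actual image-encoding step and Capture-24 preprocessing are omitted).

```python
import math

def segment_windows(samples, fs_hz, window_s):
    """Split tri-axial accelerometer samples into non-overlapping
    temporal windows of vector magnitudes.

    samples: list of (x, y, z) acceleration tuples, in g
    fs_hz: sampling frequency in Hz (hypothetical; not given in the abstract)
    window_s: temporal window length in seconds (e.g. 1, 5, 10, 15, 30)
    """
    n = fs_hz * window_s  # samples per window
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    # Step in whole windows; any trailing partial window is dropped
    return [magnitudes[i:i + n] for i in range(0, len(magnitudes) - n + 1, n)]

# Example: 10 s of a stationary sensor sampled at 100 Hz, cut into 5 s windows
windows = segment_windows([(0.0, 0.0, 1.0)] * 1000, fs_hz=100, window_s=5)
print(len(windows), len(windows[0]), windows[0][0])  # 2 500 1.0
```

Each resulting window would then be rendered as an image for the ViT-BiLSTM input; the encoding scheme itself is not described in the abstract.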

Result: The ViT-BiLSTM model excelled in the PAI classification task, achieving an overall accuracy of 98.5% ± 1.48% across five TWs: 98.7% for 1 s, 98.1% for 5 s, 98.2% for 10 s, 99.0% for 15 s, and 98.65% for 30 s. The model consistently exhibited superior accuracy in predicting sedentary behaviour (98.9% ± 1%) compared to light physical activity (98.2% ± 2%) and moderate-to-vigorous physical activity (98.2% ± 3%). ANOVA showed no significant accuracy variation across PAIs (F = 2.18, p = 0.13) or TWs (F = 0.52, p = 0.72). Accuracy and loss curves show that the model consistently improves its performance across epochs, demonstrating its robustness.
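The significance tests above are one-way ANOVAs over per-group accuracies. As an illustration only (not the authors' code), the F statistic they report can be computed as:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups
    (each group a list of accuracy values)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, weighted by group size
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Two hypothetical groups of scores, for demonstration only
f = one_way_anova_f([[1.0, 2.0], [3.0, 4.0]])
print(f)  # 8.0
```

A large F (between-group variance dominating within-group variance) with a small p-value would indicate a real accuracy difference; the reported F = 2.18 (p = 0.13) and F = 0.52 (p = 0.72) support the claim that neither PAI nor TW significantly affects accuracy.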

Conclusion: This study demonstrates the ViT-BiLSTM model's efficacy in classifying PAI using gravity-based acceleration, with performance remaining consistent across diverse TWs and intensities. However, PAI and TW could result in slight variations in the model's performance. Future research should investigate the impact of gravity-based acceleration on PAI thresholds, which may influence the model's robustness and reliability.
