A deep learning method to predict ankle joint moment during walking at different speeds with ultrasound imaging: A framework for assistive devices control.

IF 3.4 · Q2 (Engineering, Biomedical)
Wearable Technologies · Pub Date: 2022-09-06 · eCollection Date: 2022-01-01 · DOI: 10.1017/wtc.2022.18
Qiang Zhang, Natalie Fragnito, Xuefeng Bao, Nitin Sharma
{"title":"A deep learning method to predict ankle joint moment during walking at different speeds with ultrasound imaging: A framework for assistive devices control.","authors":"Qiang Zhang, Natalie Fragnito, Xuefeng Bao, Nitin Sharma","doi":"10.1017/wtc.2022.18","DOIUrl":null,"url":null,"abstract":"<p><p>Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual efforts non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses a brightness mode, that is, B-mode, of ultrasound (US) imaging from skeletal muscles to predict the ankle joint net plantarflexion moment while walking. The designed structure of customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging's region of interest (ROI) on the net plantarflexion moment prediction performance. We also compared the CNN-based moment prediction performance utilizing B-mode US and sEMG spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified an improved accuracy by using the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, compared to the prediction performance by using sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% ( < .001) and increased the prediction coefficient of determination by 20.13% ( < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of human joint voluntary effort, which can be incorporated with assistive or rehabilitative devices to improve clinical performance based on the assist-as-needed control strategy.</p>","PeriodicalId":75318,"journal":{"name":"Wearable technologies","volume":null,"pages":null},"PeriodicalIF":3.4000,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10936300/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wearable technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/wtc.2022.18","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual effort non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses brightness-mode (B-mode) ultrasound (US) imaging of skeletal muscles to predict the net ankle joint plantarflexion moment during walking. The designed structure of the customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging region of interest (ROI) on the net plantarflexion moment prediction performance. We also compared the CNN-based moment prediction performance when using B-mode US imaging versus surface electromyography (sEMG) spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified the improved accuracy of the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, compared to the prediction performance obtained with sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% (p < .001) and increased the prediction coefficient of determination by 20.13% (p < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of human joint voluntary effort, which can be incorporated into assistive or rehabilitative devices to improve clinical performance based on the assist-as-needed control strategy.
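The abstract does not include implementation details, so the sketch below is only a rough illustration of the kind of mapping it describes: a small convolutional network that regresses a scalar net plantarflexion moment from a single-channel B-mode ultrasound ROI, together with the two reported evaluation metrics (normalized RMSE and coefficient of determination). The framework choice (PyTorch), layer sizes, ROI dimensions, and the names `USMomentCNN`, `normalized_rmse`, and `r_squared` are illustrative assumptions, not the authors' released code or the paper's actual architecture.

```python
# Minimal sketch (assumed architecture, not the paper's): CNN regressor from a
# B-mode ultrasound ROI to the net ankle plantarflexion moment, plus the two
# evaluation metrics named in the abstract.
import torch
import torch.nn as nn


class USMomentCNN(nn.Module):
    """CNN regressor: B-mode US ROI (1 x H x W) -> scalar joint moment."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head agnostic to ROI size
        )
        self.regressor = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x)).squeeze(-1)


def normalized_rmse(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    """RMSE normalized by the range of the measured moment (one common convention)."""
    rmse = torch.sqrt(torch.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())


def r_squared(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    """Coefficient of determination between measured and predicted moments."""
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot


if __name__ == "__main__":
    model = USMomentCNN()
    frames = torch.randn(8, 1, 128, 128)  # batch of 8 synthetic ultrasound ROIs
    moments = model(frames)               # predicted net plantarflexion moments
    print(moments.shape)                  # torch.Size([8])
```

In a sketch like this, the inputs would be ultrasound frames cropped to the ROI and the targets would be net joint moments from inverse dynamics; the same metric helpers could be reused to compare the US-based and sEMG-spectrum-based models.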

Source journal: Wearable Technologies · CiteScore: 5.80 · Self-citation rate: 0.00% · Review time: 11 weeks