Convolution-enhanced vision transformer method for lower limb exoskeleton locomotion mode recognition

IF 3.0 · CAS Tier 4 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Expert Systems Pub Date : 2024-06-18 DOI:10.1111/exsy.13659
Jianbin Zheng, Chaojie Wang, Liping Huang, Yifan Gao, Ruoxi Yan, Chunbo Yang, Yang Gao, Yu Wang
{"title":"用于下肢外骨骼运动模式识别的卷积增强视觉变换器方法","authors":"Jianbin Zheng,&nbsp;Chaojie Wang,&nbsp;Liping Huang,&nbsp;Yifan Gao,&nbsp;Ruoxi Yan,&nbsp;Chunbo Yang,&nbsp;Yang Gao,&nbsp;Yu Wang","doi":"10.1111/exsy.13659","DOIUrl":null,"url":null,"abstract":"<p>Providing the human body with smooth and natural assistance through lower limb exoskeletons is crucial. However, a significant challenge is identifying various locomotion modes to enable the exoskeleton to offer seamless support. In this study, we propose a method for locomotion mode recognition named Convolution-enhanced Vision Transformer (Conv-ViT). This method maximizes the benefits of convolution for feature extraction and fusion, as well as the self-attention mechanism of the Transformer, to efficiently capture and handle long-term dependencies among different positions within the input sequence. By equipping the exoskeleton with inertial measurement units, we collected motion data from 27 healthy subjects, using it as input to train the Conv-ViT model. To ensure the exoskeleton's stability and safety during transitions between various locomotion modes, we not only examined the typical five steady modes (involving walking on level ground [WL], stair ascent [SA], stair descent [SD], ramp ascent [RA], and ramp descent [RD]) but also extensively explored eight locomotion transitions (including WL-SA, WL-SD, WL-RA, WL-RD, SA-WL, SD-WL, RA-WL, RD-WL). In tasks involving the recognition of five steady locomotions and eight transitions, the recognition accuracy reached 98.87% and 96.74%, respectively. Compared with three popular algorithms, ViT, convolutional neural networks, and support vector machine, the results show that the proposed method has the best recognition performance, and there are highly significant differences in accuracy and F1 score compared to other methods. Finally, we also demonstrated the excellent performance of Conv-ViT in terms of generalization performance.</p>","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":"41 10","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Convolution-enhanced vision transformer method for lower limb exoskeleton locomotion mode recognition\",\"authors\":\"Jianbin Zheng,&nbsp;Chaojie Wang,&nbsp;Liping Huang,&nbsp;Yifan Gao,&nbsp;Ruoxi Yan,&nbsp;Chunbo Yang,&nbsp;Yang Gao,&nbsp;Yu Wang\",\"doi\":\"10.1111/exsy.13659\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Providing the human body with smooth and natural assistance through lower limb exoskeletons is crucial. However, a significant challenge is identifying various locomotion modes to enable the exoskeleton to offer seamless support. In this study, we propose a method for locomotion mode recognition named Convolution-enhanced Vision Transformer (Conv-ViT). This method maximizes the benefits of convolution for feature extraction and fusion, as well as the self-attention mechanism of the Transformer, to efficiently capture and handle long-term dependencies among different positions within the input sequence. By equipping the exoskeleton with inertial measurement units, we collected motion data from 27 healthy subjects, using it as input to train the Conv-ViT model. 
To ensure the exoskeleton's stability and safety during transitions between various locomotion modes, we not only examined the typical five steady modes (involving walking on level ground [WL], stair ascent [SA], stair descent [SD], ramp ascent [RA], and ramp descent [RD]) but also extensively explored eight locomotion transitions (including WL-SA, WL-SD, WL-RA, WL-RD, SA-WL, SD-WL, RA-WL, RD-WL). In tasks involving the recognition of five steady locomotions and eight transitions, the recognition accuracy reached 98.87% and 96.74%, respectively. Compared with three popular algorithms, ViT, convolutional neural networks, and support vector machine, the results show that the proposed method has the best recognition performance, and there are highly significant differences in accuracy and F1 score compared to other methods. Finally, we also demonstrated the excellent performance of Conv-ViT in terms of generalization performance.</p>\",\"PeriodicalId\":51053,\"journal\":{\"name\":\"Expert Systems\",\"volume\":\"41 10\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/exsy.13659\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/exsy.13659","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Providing the human body with smooth and natural assistance through lower limb exoskeletons is crucial. However, a significant challenge is identifying various locomotion modes so that the exoskeleton can offer seamless support. In this study, we propose a method for locomotion mode recognition named Convolution-enhanced Vision Transformer (Conv-ViT). This method combines the strengths of convolution for feature extraction and fusion with the self-attention mechanism of the Transformer, efficiently capturing long-range dependencies among different positions within the input sequence. By equipping the exoskeleton with inertial measurement units, we collected motion data from 27 healthy subjects and used these data as input to train the Conv-ViT model. To ensure the exoskeleton's stability and safety during transitions between locomotion modes, we not only examined five typical steady modes (level-ground walking [WL], stair ascent [SA], stair descent [SD], ramp ascent [RA], and ramp descent [RD]) but also extensively explored eight locomotion transitions (WL-SA, WL-SD, WL-RA, WL-RD, SA-WL, SD-WL, RA-WL, RD-WL). In the recognition of the five steady locomotion modes and the eight transitions, accuracy reached 98.87% and 96.74%, respectively. Compared with three popular algorithms (ViT, convolutional neural network, and support vector machine), the proposed method achieves the best recognition performance, with highly significant differences in accuracy and F1 score relative to the other methods. Finally, we also demonstrate the strong generalization performance of Conv-ViT.
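
The abstract describes the architectural idea only at a high level: a convolutional front end extracts and fuses local features from the IMU signals, and a Transformer encoder then models long-range dependencies across the sequence before classification. The sketch below is a minimal illustration of that pattern in PyTorch, not the authors' implementation; the channel count (6 IMU channels), window length (128 samples), embedding size, layer counts, and the 13-class output (5 steady modes plus 8 transitions) are all assumptions made for demonstration.

```python
# A minimal, illustrative sketch (not the authors' code) of a convolution-enhanced
# Transformer classifier for IMU windows. All sizes below are assumed values.
import torch
import torch.nn as nn

class ConvViTSketch(nn.Module):
    def __init__(self, in_channels=6, num_classes=13, embed_dim=64,
                 num_heads=4, num_layers=2, seq_len=128):
        super().__init__()
        # Convolutional stem: extracts and fuses local features from the raw
        # IMU channels and downsamples the sequence into tokens.
        self.conv_stem = nn.Sequential(
            nn.Conv1d(in_channels, embed_dim, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(embed_dim),
            nn.ReLU(),
            nn.Conv1d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm1d(embed_dim),
            nn.ReLU(),
        )
        reduced_len = seq_len // 4  # two stride-2 convolutions
        # Learnable class token and positional embeddings, as in a standard ViT.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, reduced_len + 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=embed_dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, channels, time), e.g. one window of IMU readings.
        tokens = self.conv_stem(x).transpose(1, 2)   # (batch, tokens, embed_dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)               # self-attention over the token sequence
        return self.head(encoded[:, 0])              # classify from the class token

# Example: a batch of 8 windows, 6 IMU channels, 128 samples each.
model = ConvViTSketch()
logits = model(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 13])
```

This sketch only conveys how a convolutional stem can tokenize an IMU window before self-attention; the actual Conv-ViT architecture, preprocessing, and hyperparameters should be taken from the paper itself.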

Source journal
Expert Systems (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 7.40
Self-citation rate: 6.10%
Articles published: 266
Review time: 24 months
Journal description: Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper. As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.