FPGA-based Lightweight QDS-CNN System for sEMG Gesture and Force Level Recognition.

Yusen Guo, Guangyang Gou, Pan Yao, Fupeng Gao, Tianjun Ma, Jianhai Sun, Mengdi Han, Jianqun Cheng, Chunxiu Liu, Ming Zhao, Ning Xue
DOI: 10.1109/TBCAS.2024.3364235
Journal: IEEE Transactions on Biomedical Circuits and Systems
Published: 2024-02-09

Abstract

Deep learning (DL) has been used for electromyographic (EMG) signal recognition and has achieved high accuracy on multiple classification tasks. However, implementation in resource-constrained prostheses and human-computer interaction devices remains challenging. To overcome these problems, this paper implements a low-power system for EMG gesture and force-level recognition on the Zynq architecture. First, a lightweight network model structure was proposed using ultra-lightweight depthwise separable convolution (UL-DSC) and channel attention with global average pooling (CA-GAP) to reduce computational complexity while maintaining accuracy. A wearable EMG acquisition device for real-time data acquisition, measuring 36 mm × 28 mm × 4 mm, was subsequently developed. Finally, a highly parallelized dedicated hardware accelerator architecture was designed for inference computation. Eighteen gestures, including force-level variants, were tested on 22 healthy subjects. The results indicate an average accuracy of 94.92% for a model with 5.0k parameters and a size of 0.026 MB. Specifically, the average recognition accuracy for static and force-level gestures was 98.47% and 89.92%, respectively. The proposed hardware accelerator architecture was deployed with 8-bit precision, a single-frame signal inference time of 41.9 μs, a power consumption of 0.317 W, and a data throughput of 78.6 GOP/s.
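The parameter savings that depthwise separable convolution offers over standard convolution can be illustrated with a simple count. This is a generic sketch, not the paper's UL-DSC layer; the channel and kernel sizes below are illustrative, not taken from the model.

```python
# Parameter-count comparison: standard conv vs. depthwise separable conv.
# Illustrative layer shapes only (not the paper's architecture).

def standard_conv_params(c_in, c_out, k):
    """A standard conv layer learns one k x k kernel per (input, output) channel pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """A depthwise k x k conv (one kernel per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 32, 64, 3
std = standard_conv_params(c_in, c_out, k)        # 18432
dsc = depthwise_separable_params(c_in, c_out, k)  # 288 + 2048 = 2336
print(f"standard: {std}, separable: {dsc}, ratio: {std / dsc:.1f}x")
```

For these shapes the separable form needs roughly 8× fewer weights, which is the kind of reduction that makes a 5.0k-parameter, 0.026 MB model feasible.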
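The accelerator is deployed at 8-bit precision. As a rough sketch of what int8 deployment involves, the following shows symmetric per-tensor weight quantization; the specific scheme (symmetric, per-tensor, round-to-nearest) is an assumption for illustration, as the abstract does not describe the quantization method used.

```python
# Hypothetical sketch of symmetric int8 weight quantization
# (assumed scheme; not the deployed accelerator's actual code).

def quantize_int8(weights):
    """Map nonzero float weights to int8 with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-to-nearest bounds the per-weight reconstruction error by scale / 2.
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Storing weights as int8 instead of float32 cuts model memory by 4× and lets the FPGA datapath use narrow integer multipliers, which is what enables the reported 78.6 GOP/s at 0.317 W.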
