Yusen Guo, Guangyang Gou, Pan Yao, Fupeng Gao, Tianjun Ma, Jianhai Sun, Mengdi Han, Jianqun Cheng, Chunxiu Liu, Ming Zhao, Ning Xue
{"title":"基于 FPGA 的轻量级 QDS-CNN 系统用于 sEMG 手势和力级识别。","authors":"Yusen Guo, Guangyang Gou, Pan Yao, Fupeng Gao, Tianjun Ma, Jianhai Sun, Mengdi Han, Jianqun Cheng, Chunxiu Liu, Ming Zhao, Ning Xue","doi":"10.1109/TBCAS.2024.3364235","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning (DL) has been used for electromyographic (EMG) signal recognition and achieved high accuracy for multiple classification tasks. However, implementation in resource-constrained prostheses and human-computer interaction devices remains challenging. To overcome these problems, this paper implemented a low-power system for EMG gesture and force level recognition using Zynq architecture. Firstly, a lightweight network model structure was proposed by Ultra-lightweight depth separable convolution (UL-DSC) and channel attention-global average pooling (CA-GAP) to reduce the computational complexity while maintaining accuracy. A wearable EMG acquisition device for real-time data acquisition was subsequently developed with size of 36mm×28mm×4mm. Finally, a highly parallelized dedicated hardware accelerator architecture was designed for inference computation. 18 gestures were tested, including force levels from 22 healthy subjects. The results indicate that the average accuracy rate was 94.92% for a model with 5.0k parameters and a size of 0.026MB. Specifically, the average recognition accuracy for static and force-level gestures was 98.47% and 89.92%, respectively. 
The proposed hardware accelerator architecture was deployed with 8-bit precision, a single-frame signal inference time of 41.9μs, a power consumption of 0.317W, and a data throughput of 78.6 GOP/s.</p>","PeriodicalId":94031,"journal":{"name":"IEEE transactions on biomedical circuits and systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FPGA-based Lightweight QDS-CNN System for sEMG Gesture and Force Level Recognition.\",\"authors\":\"Yusen Guo, Guangyang Gou, Pan Yao, Fupeng Gao, Tianjun Ma, Jianhai Sun, Mengdi Han, Jianqun Cheng, Chunxiu Liu, Ming Zhao, Ning Xue\",\"doi\":\"10.1109/TBCAS.2024.3364235\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning (DL) has been used for electromyographic (EMG) signal recognition and achieved high accuracy for multiple classification tasks. However, implementation in resource-constrained prostheses and human-computer interaction devices remains challenging. To overcome these problems, this paper implemented a low-power system for EMG gesture and force level recognition using Zynq architecture. Firstly, a lightweight network model structure was proposed by Ultra-lightweight depth separable convolution (UL-DSC) and channel attention-global average pooling (CA-GAP) to reduce the computational complexity while maintaining accuracy. A wearable EMG acquisition device for real-time data acquisition was subsequently developed with size of 36mm×28mm×4mm. Finally, a highly parallelized dedicated hardware accelerator architecture was designed for inference computation. 18 gestures were tested, including force levels from 22 healthy subjects. The results indicate that the average accuracy rate was 94.92% for a model with 5.0k parameters and a size of 0.026MB. 
Specifically, the average recognition accuracy for static and force-level gestures was 98.47% and 89.92%, respectively. The proposed hardware accelerator architecture was deployed with 8-bit precision, a single-frame signal inference time of 41.9μs, a power consumption of 0.317W, and a data throughput of 78.6 GOP/s.</p>\",\"PeriodicalId\":94031,\"journal\":{\"name\":\"IEEE transactions on biomedical circuits and systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on biomedical circuits and systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TBCAS.2024.3364235\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biomedical circuits and systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TBCAS.2024.3364235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FPGA-based Lightweight QDS-CNN System for sEMG Gesture and Force Level Recognition.
Deep learning (DL) has been used for electromyographic (EMG) signal recognition and has achieved high accuracy on multiple classification tasks. However, deployment in resource-constrained prostheses and human-computer interaction devices remains challenging. To overcome these problems, this paper implements a low-power system for EMG gesture and force-level recognition on the Zynq architecture. First, a lightweight network model is proposed, built from ultra-lightweight depthwise separable convolution (UL-DSC) and channel attention with global average pooling (CA-GAP), reducing computational complexity while maintaining accuracy. A wearable EMG acquisition device measuring 36 mm × 28 mm × 4 mm was then developed for real-time data acquisition. Finally, a highly parallelized dedicated hardware accelerator architecture was designed for inference computation. Eighteen gestures, including force-level variants, were tested on 22 healthy subjects. The results indicate an average accuracy of 94.92% for a model with 5.0k parameters and a size of 0.026 MB. Specifically, the average recognition accuracy was 98.47% for static gestures and 89.92% for force-level gestures. The proposed hardware accelerator was deployed with 8-bit precision, achieving a single-frame inference time of 41.9 μs, a power consumption of 0.317 W, and a data throughput of 78.6 GOP/s.
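To illustrate why depthwise separable convolution keeps the parameter count small (the abstract reports only 5.0k parameters), the sketch below compares the parameter cost of a standard convolution against a depthwise-plus-pointwise factorization. The layer shapes are hypothetical examples, not taken from the paper's actual QDS-CNN architecture.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a standard k x k 2-D convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1x1 pointwise conv mixing channels."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 32 input channels, 64 output channels, 3x3 kernel.
std = standard_conv_params(32, 64, 3)          # 32*64*9  = 18432
dsc = depthwise_separable_params(32, 64, 3)    # 32*9 + 32*64 = 2336
print(std, dsc, round(std / dsc, 1))           # ~7.9x fewer parameters
```

The same factorization also cuts multiply-accumulate operations by roughly the same ratio, which is what makes 8-bit FPGA inference at microsecond latencies feasible for models of this size.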