Muhammad Hamza Zafar;Syed Kumayl Raza Moosavi;Filippo Sanfilippo
{"title":"基于emg的实时人机交互手势识别的联邦学习增强边缘深度学习模型","authors":"Muhammad Hamza Zafar;Syed Kumayl Raza Moosavi;Filippo Sanfilippo","doi":"10.1109/JSEN.2025.3529841","DOIUrl":null,"url":null,"abstract":"Electromyography (EMG)-based gesture detection plays a crucial role in human-robot interaction (HRI), providing a seamless interface for controlling robotic systems through muscle activity. Despite its potential, EMG systems face significant challenges related to the security and privacy of sensitive biometric data, as well as the computational limitations of deploying deep learning (DL) models on edge devices. To address these issues, we propose a federated learning (FL)-based DL model for EMG gesture recognition, specifically designed for edge devices. Our model utilizes a custom dataset collected using a Mindrove eight-channel EMG armband, capturing eight distinct hand gestures—rest, move left, move right, move down, move up, open fingers, close fist, and twist hand—from ten subjects with seven repetitions each, ensuring diverse and robust data for training. During preprocessing, a bandpass filter (50–450 Hz) was applied to remove noise and enhance signal quality, followed by a short-term frequency transform (STFT) with a 200-ms sample time and 50% overlap to extract relevant features from the EMG signals. The dataset was segmented into training and testing sets with a 70/30 split for evaluation. We evaluate several FL techniques, including FedAvg, FedProx, and FedSGD, demonstrating that FedAvg achieves the highest accuracy of 96.92% without quantization with Scenario 9 (15 epochs, 20 rounds) with minimal communication overhead. Additionally, our model is quantized, resulting in an 89% reduction in size and a high accuracy of 95.99%, representing a minimal loss of 0.93%, making it ideal for edge deployment without compromising performance. A comparative analysis with other DL models, such as multiconvolutional residual networks (MCRNs), multiconvolutional neural networks (MCNNs), temporal convolutional networks (TCNs), and InceptionNet, shows that our approach outperforms these models in both accuracy and efficiency. Experimental results validate the high accuracy of our model in both training/testing and real-time disaster scenario simulations using the Spot robot. The proposed solution provides a secure, efficient, and highly accurate framework for EMG-based gesture recognition on edge devices, ideal for HRI and assistive technologies such as in search and rescue operations.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 5","pages":"9139-9151"},"PeriodicalIF":4.3000,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Federated Learning-Enhanced Edge Deep Learning Model for EMG-Based Gesture Recognition in Real-Time Human–Robot Interaction\",\"authors\":\"Muhammad Hamza Zafar;Syed Kumayl Raza Moosavi;Filippo Sanfilippo\",\"doi\":\"10.1109/JSEN.2025.3529841\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Electromyography (EMG)-based gesture detection plays a crucial role in human-robot interaction (HRI), providing a seamless interface for controlling robotic systems through muscle activity. Despite its potential, EMG systems face significant challenges related to the security and privacy of sensitive biometric data, as well as the computational limitations of deploying deep learning (DL) models on edge devices. 
To address these issues, we propose a federated learning (FL)-based DL model for EMG gesture recognition, specifically designed for edge devices. Our model utilizes a custom dataset collected using a Mindrove eight-channel EMG armband, capturing eight distinct hand gestures—rest, move left, move right, move down, move up, open fingers, close fist, and twist hand—from ten subjects with seven repetitions each, ensuring diverse and robust data for training. During preprocessing, a bandpass filter (50–450 Hz) was applied to remove noise and enhance signal quality, followed by a short-term frequency transform (STFT) with a 200-ms sample time and 50% overlap to extract relevant features from the EMG signals. The dataset was segmented into training and testing sets with a 70/30 split for evaluation. We evaluate several FL techniques, including FedAvg, FedProx, and FedSGD, demonstrating that FedAvg achieves the highest accuracy of 96.92% without quantization with Scenario 9 (15 epochs, 20 rounds) with minimal communication overhead. Additionally, our model is quantized, resulting in an 89% reduction in size and a high accuracy of 95.99%, representing a minimal loss of 0.93%, making it ideal for edge deployment without compromising performance. A comparative analysis with other DL models, such as multiconvolutional residual networks (MCRNs), multiconvolutional neural networks (MCNNs), temporal convolutional networks (TCNs), and InceptionNet, shows that our approach outperforms these models in both accuracy and efficiency. Experimental results validate the high accuracy of our model in both training/testing and real-time disaster scenario simulations using the Spot robot. The proposed solution provides a secure, efficient, and highly accurate framework for EMG-based gesture recognition on edge devices, ideal for HRI and assistive technologies such as in search and rescue operations.\",\"PeriodicalId\":447,\"journal\":{\"name\":\"IEEE Sensors Journal\",\"volume\":\"25 5\",\"pages\":\"9139-9151\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-01-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Sensors Journal\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10847750/\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/10847750/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Federated Learning-Enhanced Edge Deep Learning Model for EMG-Based Gesture Recognition in Real-Time Human–Robot Interaction
Electromyography (EMG)-based gesture detection plays a crucial role in human-robot interaction (HRI), providing a seamless interface for controlling robotic systems through muscle activity. Despite its potential, EMG systems face significant challenges related to the security and privacy of sensitive biometric data, as well as the computational limitations of deploying deep learning (DL) models on edge devices. To address these issues, we propose a federated learning (FL)-based DL model for EMG gesture recognition, specifically designed for edge devices. Our model uses a custom dataset collected with a Mindrove eight-channel EMG armband, capturing eight distinct hand gestures (rest, move left, move right, move down, move up, open fingers, close fist, and twist hand) from ten subjects with seven repetitions each, ensuring diverse and robust training data. During preprocessing, a bandpass filter (50–450 Hz) was applied to remove noise and enhance signal quality, followed by a short-time Fourier transform (STFT) with a 200-ms window and 50% overlap to extract relevant features from the EMG signals. The dataset was split into training and testing sets at a 70/30 ratio for evaluation. We evaluate several FL techniques, including FedAvg, FedProx, and FedSGD, and show that FedAvg achieves the highest accuracy, 96.92% without quantization under Scenario 9 (15 epochs, 20 rounds), with minimal communication overhead. In addition, quantizing the model reduces its size by 89% while retaining an accuracy of 95.99%, a loss of only 0.93%, making it well suited for edge deployment without compromising performance. A comparative analysis with other DL models, including multiconvolutional residual networks (MCRNs), multiconvolutional neural networks (MCNNs), temporal convolutional networks (TCNs), and InceptionNet, shows that our approach outperforms these models in both accuracy and efficiency. Experimental results validate the high accuracy of the model in both training/testing and real-time disaster-scenario simulations using the Spot robot. The proposed solution provides a secure, efficient, and highly accurate framework for EMG-based gesture recognition on edge devices, well suited for HRI and assistive technologies such as search and rescue operations.
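To make the described pipeline more concrete, the sketch below illustrates the preprocessing stated in the abstract (a 50–450 Hz bandpass filter followed by an STFT with 200-ms windows and 50% overlap) together with a plain FedAvg weight-averaging step. This is a minimal sketch only: the sampling rate FS, the function names, and the array shapes are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of 50-450 Hz bandpass filtering, STFT feature
# extraction (200-ms window, 50% overlap), and plain FedAvg averaging.
# FS, N_CHANNELS, and all names/shapes are assumptions, not from the paper.
import numpy as np
from scipy.signal import butter, filtfilt, stft

FS = 1000           # assumed sampling rate in Hz (not stated in the abstract)
N_CHANNELS = 8      # the Mindrove armband provides eight EMG channels

def preprocess_emg(raw, fs=FS):
    """Bandpass-filter raw EMG (channels x samples) and return STFT magnitudes."""
    # 4th-order Butterworth bandpass, 50-450 Hz, applied with zero-phase filtering
    b, a = butter(4, [50, 450], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw, axis=-1)

    # STFT with a 200-ms window and 50% overlap
    nperseg = int(0.2 * fs)
    _, _, zxx = stft(filtered, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return np.abs(zxx)  # shape: (channels, freq_bins, time_frames)

def fedavg(client_weights, client_sizes):
    """Average per-client parameter lists, weighted by local sample counts (FedAvg)."""
    total = sum(client_sizes)
    averaged = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += (n / total) * w
    return averaged

if __name__ == "__main__":
    # Two seconds of synthetic eight-channel EMG, just to exercise the pipeline
    dummy = np.random.randn(N_CHANNELS, 2 * FS)
    features = preprocess_emg(dummy)
    print("Feature tensor shape:", features.shape)
```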
Journal Introduction:
The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing and applications of devices for sensing and transducing physical, chemical and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensors-actuators. IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave (e.g., electromagnetic and acoustic) and non-wave (e.g., chemical, gravity, particle, thermal, radiative and non-radiative) sensor data; detection, estimation and classification based on sensor data)
-Sensors in Industrial Practice