A Deep Adaptive Framework for Robust Myoelectric Hand Movement Prediction

Carl Peter Robinson, Baihua Li, Q. Meng, M. Pain

UK-RAS19 Conference: "Embedded Intelligence: Enabling and Supporting RAS Technologies" Proceedings, 2019-03-21. DOI: 10.31256/UKRAS19.1
Abstract
This work explored the requirements for accurately and reliably predicting user intention during fine-grained movements of the human hand using a deep learning methodology. The focus was on combining a feature engineering process with the capability of deep learning to identify further salient characteristics in a biological input signal. Three time-domain features (root mean square, waveform length, and slope sign changes) were extracted from the surface electromyography (sEMG) signals of 17 hand and wrist movements performed by 40 subjects. The feature data were mapped to 6 bend-resistance sensor readings from a CyberGlove II system, representing the associated hand kinematic data. These sensors were located at specific joints of interest on the human hand (the thumb's metacarpophalangeal joint, the proximal interphalangeal joint of each finger, and the radiocarpal joint of the wrist). All datasets were taken from database 2 of the NinaPro online database repository. A 3-layer long short-term memory (LSTM) model with dropout was developed to predict the 6 glove sensor readings from a corresponding sEMG feature vector. Initial trials using test data from the 40 subjects produced an average mean squared error of 0.176. This indicates a viable pathway for this method of predicting hand movement data, although further work is needed to optimize the model and to analyze the results with a more detailed set of metrics.
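The three time-domain features named in the abstract have standard definitions in the sEMG literature. A minimal sketch in Python (the function names and the slope-sign-change noise threshold are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def rms(window):
    """Root mean square of an sEMG window."""
    return np.sqrt(np.mean(window ** 2))

def waveform_length(window):
    """Cumulative waveform length: sum of absolute sample-to-sample differences."""
    return np.sum(np.abs(np.diff(window)))

def slope_sign_changes(window, threshold=0.0):
    """Count sign reversals of the slope, gated by a noise threshold."""
    d = np.diff(window)
    sign_flip = d[:-1] * d[1:] < 0
    big_enough = np.abs(d[:-1] - d[1:]) > threshold
    return int(np.sum(sign_flip & big_enough))

# Example on a toy window
w = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
features = [rms(w), waveform_length(w), slope_sign_changes(w)]
```

In practice these would be computed per electrode channel over sliding windows, then concatenated into the feature vector fed to the model.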
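The 3-layer LSTM regressor described above can be sketched in PyTorch. This is an assumed configuration, not the paper's exact one: the input size of 36 assumes NinaPro database 2's 12 sEMG electrodes with 3 features each, and the hidden size, dropout rate, and sequence length are placeholders.

```python
import torch
import torch.nn as nn

class EmgToGlove(nn.Module):
    """Sketch: stacked LSTM mapping sEMG feature sequences to 6 glove sensor readings."""
    def __init__(self, n_features=36, hidden=128, n_sensors=6, dropout=0.2):
        super().__init__()
        # 3 stacked LSTM layers; nn.LSTM applies dropout between stacked layers
        self.lstm = nn.LSTM(n_features, hidden, num_layers=3,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):
        # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out)  # per-timestep predictions of the 6 sensors

model = EmgToGlove()
x = torch.randn(4, 50, 36)          # 4 sequences, 50 timesteps, 36 features
y = model(x)                        # shape (4, 50, 6)
loss = nn.MSELoss()(y, torch.zeros_like(y))  # MSE, matching the reported metric
```

Training against the CyberGlove readings with an MSE objective corresponds directly to the average mean squared error of 0.176 reported in the abstract.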