Assistive Multimodal Wearable for Open Air Digit Recognition Using Machine Learning

John M. Rattray, Maxwell Ujhazy, Robert Stevens, Ralph Etienne-Cummings
{"title":"Assistive Multimodal Wearable for Open Air Digit Recognition Using Machine Learning","authors":"John M. Rattray, Maxwell Ujhazy, Robert Stevens, Ralph Etienne-Cummings","doi":"10.1109/NER52421.2023.10123870","DOIUrl":null,"url":null,"abstract":"To increase access to digital systems for populations suffering from upper limb motor impairment we present an assistive wearable device to capture gestures performed in air. These open air gestures provide an interface for users who are unable to exhibit the fine motor control needed for standardized human computer interfaces utilizing miniature button input such as keyboards and keypads. By capturing the motion performed at the wrist by an accelerometer as well as the muscle activation signatures using surface electromyography, we improve the classification accuracy as compared to using either modality alone. Twelve features were extracted from the multimodal time series data in both the time and frequency domain and used as input to a collection 4 machine learning models for classification, Fine Tree, K-Nearest Neighbor, Support Vector Machine, and Artificial Neural Network. One subject performed the task of writing single digits in free space and after post-processing and feature extraction we achieved a classification accuracy of 96.2% for binary discrimination of digits zero and one using a support vector machine model and an accuracy of 71% when classifying all 10 digits using an artificial neural network. Our findings indicate the feasibility of a wearable multimodal human computer interface to relieve the burden conventional interfaces present to motor impaired users.","PeriodicalId":201841,"journal":{"name":"2023 11th International IEEE/EMBS Conference on Neural Engineering (NER)","volume":"23 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 11th International IEEE/EMBS Conference on Neural Engineering (NER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NER52421.2023.10123870","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

To increase access to digital systems for populations with upper limb motor impairment, we present an assistive wearable device that captures gestures performed in air. These open air gestures provide an interface for users who cannot exhibit the fine motor control required by standard human-computer interfaces built around miniature button input, such as keyboards and keypads. By capturing wrist motion with an accelerometer and muscle activation signatures with surface electromyography, we improve classification accuracy compared to using either modality alone. Twelve features were extracted from the multimodal time series data in both the time and frequency domains and used as input to a collection of four machine learning models for classification: Fine Tree, K-Nearest Neighbor, Support Vector Machine, and Artificial Neural Network. One subject performed the task of writing single digits in free space; after post-processing and feature extraction, we achieved a classification accuracy of 96.2% for binary discrimination of digits zero and one using a support vector machine model, and an accuracy of 71% when classifying all ten digits using an artificial neural network. Our findings indicate the feasibility of a wearable multimodal human-computer interface that relieves the burden conventional interfaces present to motor-impaired users.
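
The abstract does not specify which twelve features were used or how the signals were preprocessed, so the sketch below is only illustrative: it takes a windowed multimodal recording (accelerometer axes plus a surface EMG channel), computes common time- and frequency-domain descriptors as stand-ins for the paper's feature set, and trains a support vector machine for the binary zero-versus-one task. The sampling rate, window length, channel count, and feature choices are all assumptions, and the input data are random placeholders so the pipeline runs end to end.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_features(window, fs=1000.0):
    """Time- and frequency-domain descriptors per channel (illustrative only).

    window : ndarray, shape (n_samples, n_channels) -- one gesture segment,
             e.g. 3 accelerometer axes plus sEMG channels.
    fs     : sampling rate in Hz (assumed; not given in the abstract).
    """
    feats = []
    for ch in window.T:
        # Time-domain descriptors
        feats.append(np.mean(np.abs(ch)))                               # mean absolute value
        feats.append(np.sqrt(np.mean(ch ** 2)))                         # root mean square
        feats.append(np.sum(np.diff(np.signbit(ch).astype(int)) != 0))  # zero crossings
        feats.append(np.sum(np.abs(np.diff(ch))))                       # waveform length
        # Frequency-domain descriptors
        spectrum = np.abs(np.fft.rfft(ch))
        freqs = np.fft.rfftfreq(len(ch), d=1.0 / fs)
        feats.append(freqs[np.argmax(spectrum[1:]) + 1])                # dominant frequency (skip DC)
        feats.append(np.sum(spectrum ** 2) / len(ch))                   # spectral energy
    return np.asarray(feats)


# Placeholder data: 60 gesture windows of 512 samples over 4 channels
# (3 accelerometer axes + 1 sEMG channel), labeled as digit 0 or digit 1.
rng = np.random.default_rng(0)
windows = [rng.standard_normal((512, 4)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)

X = np.stack([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

# RBF-kernel SVM on standardized features for the binary 0-vs-1 task.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

For the ten-digit task, the same feature matrix could be passed to a multilayer perceptron (for example scikit-learn's MLPClassifier) in place of the SVM; the paper's network architecture is not described in the abstract.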