Hand Exoskeleton Development Based on Voice Recognition Using Embedded Machine Learning on Raspberry Pi
{"title":"基于嵌入式机器学习的Raspberry Pi语音识别手骨架开发","authors":"Triwiyanto Triwiyanto, Endro Yulianto, Sari Luthfiyah, S. D. Musvika, Anita Miftahul Maghfiroh, M. R. Mak'ruf, D. Titisari, S.B. Ichwan","doi":"10.4028/p-ghjg94","DOIUrl":null,"url":null,"abstract":"The choice of using speech to control the exoskeleton is based on the number of exoskeletons that are controlled using the EMG signal, where the EMG signal itself has the weakness of the complexity of the signal which is influenced by the position of the electrodes, as well as muscle fatigue. The purpose of this research is to develop an exoskeleton device using voice control based on embedded machine learning on a Raspberry Pi minicomputer. In this study, two feature extraction types namely mel-frequency cepstral coefficient (MFCC) and zero-crossing (ZC), and two machine learning algorithms, namely K-nearest Neighbor (K-NN) and Decision Tree (DT) were evaluated. The hand exoskeleton development consists of 3D hand design, microphone, Raspberry Pi 4B+, PCA9685 servo driver, and servo motor. Microphone was used to record voice commands given. After model evaluation, it was found that the MFCC extraction combined with the K-NN algorithm and the best accuracy (96±7.0%). In the implementation, we found that the accuracy is 79±14.46% and 90±14.14% for open and close commands.","PeriodicalId":15161,"journal":{"name":"Journal of Biomimetics, Biomaterials and Biomedical Engineering","volume":"55 1","pages":"81 - 92"},"PeriodicalIF":0.5000,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Hand Exoskeleton Development Based on Voice Recognition Using Embedded Machine Learning on Raspberry Pi\",\"authors\":\"Triwiyanto Triwiyanto, Endro Yulianto, Sari Luthfiyah, S. D. Musvika, Anita Miftahul Maghfiroh, M. R. Mak'ruf, D. Titisari, S.B. Ichwan\",\"doi\":\"10.4028/p-ghjg94\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The choice of using speech to control the exoskeleton is based on the number of exoskeletons that are controlled using the EMG signal, where the EMG signal itself has the weakness of the complexity of the signal which is influenced by the position of the electrodes, as well as muscle fatigue. The purpose of this research is to develop an exoskeleton device using voice control based on embedded machine learning on a Raspberry Pi minicomputer. In this study, two feature extraction types namely mel-frequency cepstral coefficient (MFCC) and zero-crossing (ZC), and two machine learning algorithms, namely K-nearest Neighbor (K-NN) and Decision Tree (DT) were evaluated. The hand exoskeleton development consists of 3D hand design, microphone, Raspberry Pi 4B+, PCA9685 servo driver, and servo motor. Microphone was used to record voice commands given. After model evaluation, it was found that the MFCC extraction combined with the K-NN algorithm and the best accuracy (96±7.0%). 
In the implementation, we found that the accuracy is 79±14.46% and 90±14.14% for open and close commands.\",\"PeriodicalId\":15161,\"journal\":{\"name\":\"Journal of Biomimetics, Biomaterials and Biomedical Engineering\",\"volume\":\"55 1\",\"pages\":\"81 - 92\"},\"PeriodicalIF\":0.5000,\"publicationDate\":\"2022-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Biomimetics, Biomaterials and Biomedical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4028/p-ghjg94\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Biomimetics, Biomaterials and Biomedical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4028/p-ghjg94","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Triwiyanto Triwiyanto, Endro Yulianto, Sari Luthfiyah, S. D. Musvika, Anita Miftahul Maghfiroh, M. R. Mak'ruf, D. Titisari, S.B. Ichwan
The choice of speech to control the exoskeleton is motivated by the large number of exoskeletons that are controlled using the EMG signal, a signal whose complexity is influenced by electrode placement and muscle fatigue. The purpose of this research is to develop an exoskeleton device using voice control based on embedded machine learning on a Raspberry Pi minicomputer. In this study, two feature extraction types, namely the mel-frequency cepstral coefficient (MFCC) and zero crossing (ZC), and two machine learning algorithms, namely K-nearest neighbor (K-NN) and decision tree (DT), were evaluated. The hand exoskeleton development consists of a 3D hand design, a microphone, a Raspberry Pi 4B+, a PCA9685 servo driver, and a servo motor. The microphone was used to record the given voice commands. After model evaluation, it was found that MFCC extraction combined with the K-NN algorithm achieved the best accuracy (96±7.0%). In the implementation, we found that the accuracy is 79±14.46% and 90±14.14% for the open and close commands, respectively.
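As a rough illustration of the recognition stage described above, the sketch below extracts MFCC and zero-crossing features from a recorded command and compares the K-NN and decision tree classifiers. It is a minimal sketch, assuming the librosa and scikit-learn libraries; the sampling rate, number of coefficients, neighbor count, and function names are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of the feature extraction and classification stage,
# assuming librosa and scikit-learn are available on the Raspberry Pi.
# Parameter values here are illustrative, not the paper's settings.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score


def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Return a fixed-length MFCC + zero-crossing feature vector for one command."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    zcr = librosa.feature.zero_crossing_rate(y)              # shape: (1, frames)
    # Average over time so every recording maps to the same vector length.
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1)])


def evaluate(X, y):
    """Cross-validate the two classifiers compared in the paper."""
    for name, clf in [("K-NN", KNeighborsClassifier(n_neighbors=3)),
                      ("Decision Tree", DecisionTreeClassifier())]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: {scores.mean() * 100:.1f} +/- {scores.std() * 100:.1f} %")
```

Here X would be built by stacking extract_features(...) over the recorded "open" and "close" commands, with y holding the matching labels; the classifier selected by this comparison is then used at run time to label each new recording.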
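Once a command is classified, it has to be mapped to servo motion through the PCA9685 driver. The snippet below is a minimal actuation sketch, assuming the Adafruit CircuitPython ServoKit library; the channel assignments and the open/close angles are hypothetical, not taken from the paper.

```python
# Minimal actuation sketch, assuming the adafruit-circuitpython-servokit
# library, which drives the PCA9685 over I2C on the Raspberry Pi.
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)          # the PCA9685 exposes 16 PWM channels
FINGER_CHANNELS = [0, 1, 2, 3, 4]    # hypothetical: one servo per finger


def move_hand(command):
    """Map a recognized voice command to servo angles on the exoskeleton."""
    angles = {"open": 0, "close": 180}   # illustrative end positions
    if command not in angles:
        return  # ignore anything other than the two trained commands
    for channel in FINGER_CHANNELS:
        kit.servo[channel].angle = angles[command]
```

In use, the classifier output would drive the fingers, e.g. move_hand(clf.predict([features])[0]) for a trained model clf from the previous sketch.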