Integrated and Configurable Voice Activation and Speaker Verification System for a Robotic Exoskeleton Glove
Yunfei Guo, Wenda Xu, Sarthak Pradhan, Cesar Bravo, Pinhas Ben-Tzvi
Proceedings of the ASME Design Engineering Technical Conferences, Vol. 10, August 2020. DOI: 10.1115/detc2020-22365
Abstract
Efficient human-machine interfaces (HMIs) for exoskeletons remain an active research topic, and several methods have been proposed, including computer vision, EEG (electroencephalogram), and voice recognition. However, some of these methods lack sufficient accuracy, security, and portability. This paper proposes an HMI referred to as the integrated trigger-word configurable voice activation and speaker verification system (CVASV). The CVASV system is designed for embedded systems with limited computing power and can be applied to any exoskeleton platform. It consists of two main sections: an API-based voice activation section and a deep-learning-based text-independent voice verification section. These two sections are combined into a system that allows the user to configure the activation trigger word and verify the user's command in real time.
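The abstract describes a two-stage pipeline: API-based trigger-word activation followed by deep-learning-based, text-independent speaker verification. The sketch below shows one way such a pipeline could be wired together; it is not the authors' implementation. The function names (`transcribe_with_api`, `extract_speaker_embedding`), the cosine-similarity matching, and the 0.7 threshold are illustrative assumptions only.

```python
# Illustrative sketch of a trigger-word activation + speaker-verification
# pipeline in the spirit of CVASV. All components are placeholders: a real
# system would call a cloud speech API for transcription and a trained deep
# network for speaker embeddings.

from typing import Optional

import numpy as np


def transcribe_with_api(audio: np.ndarray, sample_rate: int) -> str:
    """Hypothetical stand-in for an API-based speech-to-text call."""
    # A deployed system would send `audio` to a cloud ASR service and return
    # the recognized text. This stub returns a fixed string so the example
    # runs end to end.
    return "activate glove grasp"


def extract_speaker_embedding(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Hypothetical stand-in for a deep-learning speaker-embedding model."""
    # A real implementation would run a trained text-independent speaker
    # verification network. This placeholder derives a crude fixed-length
    # vector from the audio spectrum so the similarity step is runnable.
    spectrum = np.abs(np.fft.rfft(audio, n=512))[:128]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def process_command(audio: np.ndarray,
                    sample_rate: int,
                    trigger_word: str,
                    enrolled_embedding: np.ndarray,
                    threshold: float = 0.7) -> Optional[str]:
    """Return the recognized command if the configurable trigger word is
    present and the speaker matches the enrolled user; otherwise None."""
    text = transcribe_with_api(audio, sample_rate)

    # Stage 1: configurable trigger-word activation.
    if trigger_word.lower() not in text.lower():
        return None

    # Stage 2: text-independent speaker verification via embedding similarity.
    embedding = extract_speaker_embedding(audio, sample_rate)
    if cosine_similarity(embedding, enrolled_embedding) < threshold:
        return None

    return text


if __name__ == "__main__":
    rate = 16000
    rng = np.random.default_rng(0)
    enrollment_audio = rng.standard_normal(rate)        # 1 s of "enrollment" audio
    command_audio = enrollment_audio + 0.01 * rng.standard_normal(rate)

    enrolled = extract_speaker_embedding(enrollment_audio, rate)
    print("Accepted command:", process_command(command_audio, rate, "activate", enrolled))
```

In this sketch the trigger word is an ordinary string parameter, which mirrors the configurability described in the abstract: changing the activation phrase requires no retraining, only a different argument to `process_command`.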