Title: Decoding of Imagined Speech Neural EEG Signals Using Deep Reinforcement Learning Technique
Authors: Nrushingh Charan Mahapatra, Prachet Bhuyan
DOI: 10.1109/ASSIC55218.2022.10088387 (https://doi.org/10.1109/ASSIC55218.2022.10088387)
Published in: 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC)
Publication date: 2022-11-19
Citations: 0
Abstract
The basic objective of this study is to establish a reinforcement learning technique for decoding imagined speech neural signals. Imagined speech neural computational studies aim to give people who cannot communicate, due to physical or neurological limitations on speech production, alternative natural communication pathways. An advanced human-computer interface that decodes imagined speech from measurable neural activity could enable natural interactions and significantly improve quality of life, especially for people with few communication alternatives. Recent advances in signal processing and in reinforcement learning built on deep learning algorithms have enabled high-quality imagined speech decoding from noninvasively recorded neural activity. Most prior research focused on supervised classification of the collected signals, with no naturalistic, feedback-based training of imagined speech models for brain-computer interfaces. In this study, we employ deep reinforcement learning to create an imagined speech decoder: an artificial agent based on the deep Q-network (DQN) that learns effective policies directly from multidimensional electroencephalography (EEG) signal inputs through end-to-end reinforcement learning. We show that the artificial agent, supplied only with neural signals and rewards as inputs, decoded the imagined speech neural signals with 81.6947% overall accuracy.
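The abstract frames decoding as a reward-driven policy: the agent observes an EEG feature vector, picks an imagined-speech class as its action, and receives a reward when the decode is correct. The sketch below illustrates that framing only; it is not the paper's implementation. The Gaussian synthetic "EEG" features, the network sizes, the reward scheme, and all hyperparameters are assumptions, and the two-layer NumPy Q-network stands in for the paper's unspecified DQN architecture (single-step episodes, so the Q-target is just the reward, with no bootstrap term).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: stand-in for per-trial EEG feature vectors, since the paper's
# preprocessing is not described here. Three imagined-speech classes as
# Gaussian blobs in an 8-dimensional feature space.
N_CLASSES, DIM, N = 3, 8, 600
centers = rng.normal(0.0, 2.0, size=(N_CLASSES, DIM))
labels = rng.integers(0, N_CLASSES, size=N)
X = centers[labels] + rng.normal(0.0, 0.5, size=(N, DIM))

# Tiny Q-network: one tanh hidden layer -> one Q-value per class (action).
H = 16
W1 = rng.normal(0.0, 0.3, size=(DIM, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.3, size=(H, N_CLASSES)); b2 = np.zeros(N_CLASSES)

def q_values(x):
    """Forward pass; returns hidden activations and Q-values."""
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr, eps = 0.05, 0.2  # learning rate and epsilon-greedy exploration rate
for epoch in range(30):
    for i in rng.permutation(N):
        x, y = X[i], labels[i]
        h, q = q_values(x)
        # Epsilon-greedy action selection: the action IS the decoded class.
        a = int(rng.integers(N_CLASSES)) if rng.random() < eps else int(np.argmax(q))
        r = 1.0 if a == y else 0.0  # reward signal: 1 for a correct decode
        # One-step episode: Q-target is just r, error is the TD residual.
        err = q[a] - r
        # Manual gradients of 0.5*err^2 through the chosen action's head.
        gh = W2[:, a] * err          # dL/dh
        gz = gh * (1.0 - h**2)       # backprop through tanh
        W2[:, a] -= lr * h * err
        b2[a] -= lr * err
        W1 -= lr * np.outer(x, gz)
        b1 -= lr * gz

# Evaluate the greedy policy as a decoder.
preds = np.array([int(np.argmax(q_values(x)[1])) for x in X])
accuracy = float((preds == labels).mean())
print(f"greedy-policy decoding accuracy: {accuracy:.3f}")
```

On this easy synthetic data the greedy policy decodes well above chance (1/3); the point of the sketch is the reward-only training loop, in which the agent never sees the label directly, only a scalar reward, mirroring the "neural signals and rewards as inputs" setup the abstract describes.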