Decoding of Imagined Speech Neural EEG Signals Using Deep Reinforcement Learning Technique

Nrushingh Charan Mahapatra, Prachet Bhuyan
Published in: 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), 19 November 2022
DOI: 10.1109/ASSIC55218.2022.10088387

Abstract

The basic objective of this study is to establish a reinforcement learning technique for decoding imagined speech neural signals. Imagined speech neural computational studies aim to give people who cannot communicate, owing to physical or neurological limitations of speech production, alternative natural communication pathways. An advanced human-computer interface that decodes imagined speech from measurable neural activity could enable natural interaction and significantly improve quality of life, especially for people with few communication alternatives. Recent advances in signal processing and in reinforcement learning built on deep learning algorithms have enabled high-quality imagined speech decoding from noninvasively recorded neural activity. Most prior research focused on supervised classification of collected signals, with no naturalistic feedback-based training of imagined speech models for brain-computer interfaces. In this study we employ deep reinforcement learning to create an imagined speech decoder, an artificial agent based on the deep Q-network (DQN), so that the agent can learn effective policies directly from multidimensional neural electroencephalography (EEG) signal inputs through end-to-end reinforcement learning. We show that the artificial agent, supplied only with neural signals and rewards as inputs, decoded the imagined speech neural signals efficiently with 81.6947% overall accuracy.
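The abstract's setup, in which an agent receives an EEG-derived state, chooses an imagined-word action, and gets only a reward as feedback, can be illustrated with a minimal sketch. Everything below is an illustrative stand-in, not the paper's actual pipeline: the synthetic "EEG" features, class count, linear Q-function, and hyperparameters are all assumptions. Because each decoding episode is one step long (choose a word, receive reward, episode ends), the Q-learning target collapses to the immediate reward, so a DQN-style loop with experience replay reduces to regressing Q(s, a) onto r.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG feature vectors: each of the
# imagined-speech prompts gets a distinct mean pattern plus noise.
n_classes, dim = 4, 16
prototypes = rng.normal(size=(n_classes, dim))

def sample_trial():
    label = int(rng.integers(n_classes))
    signal = prototypes[label] + 0.3 * rng.normal(size=dim)
    return signal, label

# Q-function: a single linear layer Q(s) = W s + b, one output per action
# (one action per candidate imagined word).
W = np.zeros((n_classes, dim))
b = np.zeros(n_classes)

def q_values(s):
    return W @ s + b

buffer = []                     # experience replay: (state, action, reward)
eps, lr, batch = 1.0, 0.05, 32  # epsilon-greedy exploration, step size, minibatch

for step in range(4000):
    s, label = sample_trial()
    # Epsilon-greedy action selection over candidate words.
    a = int(rng.integers(n_classes)) if rng.random() < eps else int(np.argmax(q_values(s)))
    r = 1.0 if a == label else 0.0   # reward is the only feedback the agent sees
    buffer.append((s, a, r))
    eps = max(0.05, eps * 0.999)     # decay exploration over time

    if len(buffer) >= batch:
        for i in rng.integers(len(buffer), size=batch):
            si, ai, ri = buffer[i]
            td_error = ri - q_values(si)[ai]  # one-step episode: target is just r
            W[ai] += lr * td_error * si       # gradient step on squared TD error
            b[ai] += lr * td_error

# Evaluate the learned greedy policy on fresh trials.
trials = 500
correct = sum(int(np.argmax(q_values(s)) == label)
              for s, label in (sample_trial() for _ in range(trials)))
accuracy = correct / trials
print(f"greedy-policy decoding accuracy: {accuracy:.2%}")
```

The sketch uses a linear Q-function rather than a deep network purely to stay self-contained; the structure of the loop (state in, epsilon-greedy action, scalar reward, replay-based Q updates) is the part that mirrors the DQN formulation the abstract describes.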