Sensory Audio Focusing Detection Using Brain-Computer Interface Archetype

Ryan Villanueva, Brandon Hoang, Urmil Shah, Yazmin Martinez, K. George
DOI: 10.1109/CogMI48466.2019.00022
2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI), December 2019
Citations: 2

Abstract

Every day, people are placed in environments where countless conversations simultaneously take place within earshot. Speech intelligibility in the presence of multiple speakers, commonly known as the 'Cocktail Party Phenomenon', is significantly reduced for most hearing-impaired listeners who use hearing assistive devices [1]. Prior research addressing this issue includes noise filtering based on the trajectories of multiple moving speakers and localization of talking targets based on face detection [2][3]. This study focuses on the practicality of audio filtering through measuring electroencephalogram (EEG) signals using a Brain-Computer Interface (BCI) system. The study explores the use of machine learning algorithms to classify which speaker the listener is focusing on. In this study, training data is obtained from a listener focusing on one auditory stimulus (an audiobook) while other auditory stimuli are presented at the same time. A g.Nautilus BCI headset was used to obtain EEG data. After collecting trial data for each audio source, a machine learning algorithm trains a classifier to distinguish one audiobook from another. Data were collected from five subjects in each trial. Results yielded an accuracy above 90% across all three experiments.
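The abstract does not specify the classifier or the EEG features used, so the pipeline below is only a minimal sketch of the described train-and-classify step, assuming per-channel features (such as band power) extracted from EEG windows and a linear SVM. The feature generator and the separation offset are hypothetical stand-ins for real g.Nautilus recordings.

```python
# Hypothetical sketch of the classification step described in the abstract.
# The feature model (per-channel band-power-like values) and the attention-
# dependent offset are assumptions, not the paper's actual method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_eeg_features(n_trials, n_channels, offset):
    # Stand-in for per-channel features from one EEG window; 'offset'
    # mimics an attention-dependent shift that makes the classes separable.
    return rng.normal(loc=offset, scale=1.0, size=(n_trials, n_channels))

# Two classes: listener attending to audiobook A vs. audiobook B.
X = np.vstack([synthetic_eeg_features(50, 16, 0.0),
               synthetic_eeg_features(50, 16, 1.5)])
y = np.array([0] * 50 + [1] * 50)

# Standardize features, then fit a linear SVM; score with 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On well-separated synthetic features like these, cross-validated accuracy lands in the >90% range the paper reports; real EEG would of course require careful preprocessing and feature extraction first.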