Sensory Audio Focusing Detection Using Brain-Computer Interface Archetype
Ryan Villanueva, Brandon Hoang, Urmil Shah, Yazmin Martinez, K. George
2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI), December 2019. DOI: 10.1109/CogMI48466.2019.00022
{"title":"基于脑机接口原型的感觉音频聚焦检测","authors":"Ryan Villanueva, Brandon Hoang, Urmil Shah, Yazmin Martinez, K. George","doi":"10.1109/CogMI48466.2019.00022","DOIUrl":null,"url":null,"abstract":"Everyday people are placed in environments where countless conversations simultaneously take place within earshot. Speech intelligibility in the presence of multiple speakers, commonly known as the 'Cocktail Party Phenomenon', is significantly reduced for most hearing-impaired listeners who use hearing assistive devices [1]. Prior research addressing this issue include noise filtering based on trajectories of multiple moving speakers and locations of talking targets based on face detection [2][3]. This study focuses on the practicality of audio filtering through measuring electroencephalogram (EEG) signals using a Brain-Computer Interfaces (BCI) system. The study explores the use of machine learning algorithms to classify which speaker the listener is focusing on. In this study, training data is obtained of a listener focusing on one auditory stimulus (audiobook) while other auditory stimuli are presented at the same time. A g.Nautilus BCI headset was used to obtain EEG data. After collecting trial data for each audio source, a machine learning algorithm trains a classifier to distinguish one audiobook between another. Data was collected from five subjects in each trial. Results yielded an accuracy of above 90% from all three experiments.","PeriodicalId":116160,"journal":{"name":"2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Sensory Audio Focusing Detection Using Brain-Computer Interface Archetype\",\"authors\":\"Ryan Villanueva, Brandon Hoang, Urmil Shah, Yazmin Martinez, K. George\",\"doi\":\"10.1109/CogMI48466.2019.00022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Everyday people are placed in environments where countless conversations simultaneously take place within earshot. Speech intelligibility in the presence of multiple speakers, commonly known as the 'Cocktail Party Phenomenon', is significantly reduced for most hearing-impaired listeners who use hearing assistive devices [1]. Prior research addressing this issue include noise filtering based on trajectories of multiple moving speakers and locations of talking targets based on face detection [2][3]. This study focuses on the practicality of audio filtering through measuring electroencephalogram (EEG) signals using a Brain-Computer Interfaces (BCI) system. The study explores the use of machine learning algorithms to classify which speaker the listener is focusing on. In this study, training data is obtained of a listener focusing on one auditory stimulus (audiobook) while other auditory stimuli are presented at the same time. A g.Nautilus BCI headset was used to obtain EEG data. After collecting trial data for each audio source, a machine learning algorithm trains a classifier to distinguish one audiobook between another. Data was collected from five subjects in each trial. 
Results yielded an accuracy of above 90% from all three experiments.\",\"PeriodicalId\":116160,\"journal\":{\"name\":\"2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CogMI48466.2019.00022\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CogMI48466.2019.00022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Sensory Audio Focusing Detection Using Brain-Computer Interface Archetype
Every day, people are placed in environments where countless conversations take place simultaneously within earshot. Speech intelligibility in the presence of multiple simultaneous speakers, a situation commonly known as the 'Cocktail Party Phenomenon', is significantly reduced for most hearing-impaired listeners who use hearing assistive devices [1]. Prior research addressing this issue includes noise filtering based on the trajectories of multiple moving speakers and localization of talking targets based on face detection [2][3]. This study focuses on the practicality of audio filtering through measuring electroencephalogram (EEG) signals using a Brain-Computer Interface (BCI) system. The study explores the use of machine learning algorithms to classify which speaker the listener is focusing on. Training data are obtained from a listener focusing on one auditory stimulus (an audiobook) while other auditory stimuli are presented simultaneously. A g.Nautilus BCI headset was used to obtain the EEG data. After trial data are collected for each audio source, a machine learning algorithm trains a classifier to distinguish one audiobook from another. Data were collected from five subjects in each trial. All three experiments yielded classification accuracies above 90%.
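The abstract does not specify the feature extraction or classification algorithm used. As a minimal sketch of one plausible pipeline, the following assumes band-power features computed from band-pass-filtered EEG epochs and an SVM classifier, with synthetic data standing in for the recorded trials; the sampling rate, channel count, and every other parameter here are illustrative assumptions, not the paper's method:

```python
# Hypothetical sketch: classifying which audio stream a listener attends to
# from labeled EEG epochs. The paper does not publish its pipeline; the
# band-power features, SVM, and synthetic data below are all assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250          # assumed sampling rate (Hz)
N_CHANNELS = 8    # assumed electrode count
EPOCH_SECONDS = 2 # assumed epoch length

def bandpass(epoch, lo=1.0, hi=40.0, fs=FS):
    """Zero-phase band-pass filter applied per channel."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epoch, axis=-1)

def band_power_features(epoch, fs=FS):
    """Mean spectral power in canonical EEG bands for each channel."""
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in bands]
    return np.concatenate(feats)  # shape: (n_channels * n_bands,)

# Synthetic stand-in for recorded epochs; label = which audiobook was attended.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, N_CHANNELS, FS * EPOCH_SECONDS))
labels = rng.integers(0, 2, size=100)

X = np.array([band_power_features(bandpass(e)) for e in epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

On real attended-speech EEG the features would be computed on actual headset recordings rather than random noise, and the cross-validated accuracy would indicate whether the attended audiobook is decodable, which is the quantity the study reports.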