{"title":"核特征空间学习套件增强了基于smr的EEG-BCI分类","authors":"B. Abibullaev","doi":"10.1109/IWW-BCI.2017.7858158","DOIUrl":null,"url":null,"abstract":"Brain-Computer Interface (BCI) research hopes to improve the quality of life for people with severe motor disabilities by providing a capability to control external devices using their thoughts. To control a device through BCI, neural signals of a user must be translated to meaningful control commands using various machine learning components, e.g. feature extraction, dimensionality reduction and classification, that should also be carefully designed for practical use. However, the noise and variability in the neural data pose one of the greatest challenges that in practice previously functioning BCI fails in the subsequent operation requiring re-tuning/optimization. This paper presents an idea of defining multiple feature spaces and optimal decision boundaries therein to account for noise and variability in data and improve a generalization of a learning machine. The spaces are defined in the Reproducing Kernel Hilbert Spaces induced by a Radial Basis Gaussian function. Then the learning is done via L1-regularized Support Vector Machines. The central idea behind our approach is that a classifier predicts an unseen test examples by learning more rich feature spaces with a suite of optimal hyperparameters. Empirical evaluation have shown an improved generalization performance (range 79–90%) on two class motor imagery Electroencephalography (EEG) data, when compared with other conventional machine learning methods.","PeriodicalId":443427,"journal":{"name":"2017 5th International Winter Conference on Brain-Computer Interface (BCI)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Learning suite of kernel feature spaces enhances SMR-based EEG-BCI classification\",\"authors\":\"B. Abibullaev\",\"doi\":\"10.1109/IWW-BCI.2017.7858158\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Brain-Computer Interface (BCI) research hopes to improve the quality of life for people with severe motor disabilities by providing a capability to control external devices using their thoughts. To control a device through BCI, neural signals of a user must be translated to meaningful control commands using various machine learning components, e.g. feature extraction, dimensionality reduction and classification, that should also be carefully designed for practical use. However, the noise and variability in the neural data pose one of the greatest challenges that in practice previously functioning BCI fails in the subsequent operation requiring re-tuning/optimization. This paper presents an idea of defining multiple feature spaces and optimal decision boundaries therein to account for noise and variability in data and improve a generalization of a learning machine. The spaces are defined in the Reproducing Kernel Hilbert Spaces induced by a Radial Basis Gaussian function. Then the learning is done via L1-regularized Support Vector Machines. The central idea behind our approach is that a classifier predicts an unseen test examples by learning more rich feature spaces with a suite of optimal hyperparameters. 
Empirical evaluation have shown an improved generalization performance (range 79–90%) on two class motor imagery Electroencephalography (EEG) data, when compared with other conventional machine learning methods.\",\"PeriodicalId\":443427,\"journal\":{\"name\":\"2017 5th International Winter Conference on Brain-Computer Interface (BCI)\",\"volume\":\"131 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 5th International Winter Conference on Brain-Computer Interface (BCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IWW-BCI.2017.7858158\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 5th International Winter Conference on Brain-Computer Interface (BCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWW-BCI.2017.7858158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Brain-Computer Interface (BCI) research aims to improve the quality of life of people with severe motor disabilities by providing the ability to control external devices with their thoughts. To control a device through a BCI, the user's neural signals must be translated into meaningful control commands by a chain of machine learning components, e.g. feature extraction, dimensionality reduction, and classification, each of which must be carefully designed for practical use. However, the noise and variability in neural data pose one of the greatest challenges: in practice, a previously functioning BCI may fail in a subsequent session and require re-tuning or re-optimization. This paper presents the idea of defining multiple feature spaces, each with its own optimal decision boundary, to account for the noise and variability in the data and to improve the generalization of the learning machine. The feature spaces are Reproducing Kernel Hilbert Spaces induced by Gaussian radial basis functions, and learning is carried out with L1-regularized Support Vector Machines. The central idea behind our approach is that a classifier predicts unseen test examples more reliably by learning richer feature spaces over a suite of optimal hyperparameters. Empirical evaluation shows improved generalization performance (79–90%) on two-class motor imagery Electroencephalography (EEG) data compared with other conventional machine learning methods.
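As an illustration of the kind of pipeline the abstract describes, the minimal sketch below is an assumption-laden stand-in, not the author's implementation: it approximates a Gaussian-RBF-induced RKHS with random Fourier features (scikit-learn's RBFSampler), trains an L1-regularized linear SVM in each approximated feature space over a suite of kernel widths (gamma) and regularization strengths (C), and keeps the model with the best cross-validated accuracy. The arrays X and y are synthetic placeholders for SMR feature vectors and two-class motor imagery labels, not real EEG data.

```python
# Hedged sketch: suite of approximated Gaussian-RBF feature spaces + L1-regularized SVMs.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))      # placeholder: 200 trials x 32 SMR features
y = rng.integers(0, 2, size=200)        # placeholder: two-class motor imagery labels

gammas = np.logspace(-3, 1, 5)          # suite of Gaussian kernel widths
Cs = np.logspace(-2, 2, 5)              # L1 regularization strengths

best_params, best_score = None, -np.inf
for gamma in gammas:
    for C in Cs:
        # Each (gamma, C) pair defines one approximated kernel feature space
        # and one sparse (L1) linear decision boundary within it.
        model = make_pipeline(
            StandardScaler(),
            RBFSampler(gamma=gamma, n_components=300, random_state=0),
            LinearSVC(penalty="l1", dual=False, C=C, max_iter=5000),
        )
        score = cross_val_score(model, X, y, cv=5).mean()
        if score > best_score:
            best_params, best_score = (gamma, C), score

print(f"best (gamma, C) = {best_params}, CV accuracy = {best_score:.2f}")
```

On real data, the cross-validation step is what guards against the session-to-session variability the abstract highlights: the hyperparameter suite is re-scored rather than a single fixed kernel being trusted to keep working.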