Mel Frequency Cepstral Coefficients Enhance Imagined Speech Decoding Accuracy from EEG
Ciaran Cooney, R. Folli, D. Coyle
2018 29th Irish Signals and Systems Conference (ISSC), published 2018-05-03
DOI: 10.1109/ISSC.2018.8585291
Citations: 32
Abstract
Imagined speech has recently become an important neuro-paradigm in the field of brain-computer interface (BCI) research. Electroencephalogram (EEG) recordings made during imagined speech production are difficult to decode accurately, due to factors such as weak neural correlates, limited spatial specificity, and noise introduced during the recording process. In this study, a dataset of imagined speech recordings obtained during production of eleven different units of imagined speech is used to investigate the relative effects of different features on classification accuracy. Three distinct feature-sets are computed from the data: a linear feature-set, a non-linear feature-set, and a feature-set composed solely of mel frequency cepstral coefficients (MFCC). Each feature-set is used to train a decision tree classifier and a support vector machine (SVM) classifier. The results indicate that MFCC features provide greater discrimination of imagined speech EEG recordings than the other features evaluated, and that phonological differences between imagined words can serve as an aid to classification.
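The MFCC pipeline the abstract refers to is a standard one from speech processing: window a signal frame, take its power spectrum, pool it through a triangular mel-scale filterbank, take logs, and decorrelate with a DCT-II. The sketch below is a minimal NumPy illustration of that pipeline applied to a single synthetic frame, not the authors' implementation; the 250 Hz sampling rate and the filterbank/cepstral sizes are assumptions for illustration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters with centres evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def mfcc(frame, fs, n_filters=26, n_ceps=12):
    # Power spectrum of a single Hamming-windowed frame
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2 / n_fft
    # Log mel filterbank energies (small epsilon guards log(0))
    energies = np.log(mel_filterbank(n_filters, n_fft, fs) @ spec + 1e-10)
    # DCT-II to decorrelate the log energies into cepstral coefficients
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_filters)))
    return basis @ energies

fs = 250                                   # assumed EEG sampling rate
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)  # 1 s toy frame
coeffs = mfcc(frame, fs)
print(coeffs.shape)                        # one 12-dimensional MFCC vector per frame
```

In a classification setting like the paper's, such per-frame coefficient vectors (often with their temporal statistics) would form the feature matrix handed to the decision tree and SVM classifiers.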