{"title":"Vowel classification from imagined speech using sub-band EEG frequencies and deep belief networks","authors":"R. Anandha Sree, A. Kavitha","doi":"10.1109/ICSCN.2017.8085710","DOIUrl":null,"url":null,"abstract":"This work has focused on the possibilities of classifying vowels ‘a’, ‘e’, ‘i’, ‘o’, ‘u’ from EEG signals, that has been derived while imagining the vowels, with minimum input features. The EEG signals have been acquired from 5 subjects while imagining and uttering the vowels during a well defined experimental protocol, have been processed and segmented using established signal processing routines. The signals have been segmented under various sub-band frequencies and subjected to Db4 Discrete Wavelet Transform. The various conventional and derived energy based features have been acquired from the sub-band frequency signals, trained and tested using Deep Belief Networks for classifying the imagined vowels. The experiments have been repeated on various electrode combinations. Results obtained from all sub-band frequency based features have shown a good classification accuracy. Further, classification protocol employing features that have been derived from each sub-band frequency has shown that the theta and gamma band frequency features have been more effective with a vowel classification accuracy ranging between 75–100%.","PeriodicalId":383458,"journal":{"name":"2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSCN.2017.8085710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
This work focuses on classifying the vowels 'a', 'e', 'i', 'o', 'u' from EEG signals recorded while the vowels were imagined, using a minimal set of input features. The EEG signals were acquired from five subjects while they imagined and uttered the vowels under a well-defined experimental protocol, and were then processed and segmented using established signal-processing routines. The segmented signals were decomposed into sub-band frequencies using the Db4 discrete wavelet transform. Conventional and derived energy-based features were extracted from the sub-band signals and used to train and test Deep Belief Networks for classifying the imagined vowels, and the experiments were repeated over various electrode combinations. Features drawn from all sub-bands together yielded good classification accuracy. Further, when features derived from each individual sub-band were classified separately, the theta- and gamma-band features proved most effective, with vowel classification accuracies ranging between 75% and 100%.
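The abstract outlines a pipeline of Db4 wavelet sub-band decomposition, energy-based feature extraction, and Deep Belief Network classification. The sketch below illustrates one plausible reading of those steps in Python; the sampling rate, decomposition level, layer sizes, and the stacked-RBM stand-in for the DBN are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of the pipeline described in the abstract: Db4 DWT sub-band
# energies as features, classified with a DBN-like model. All parameters
# (256 Hz sampling, 5 decomposition levels, layer sizes) are assumptions.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def subband_energy_features(segment, wavelet="db4", level=5):
    """Relative energy of each wavelet sub-band of a 1-D EEG segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)  # [cA5, cD5, ..., cD1]
    energies = np.array([float(np.sum(c ** 2)) for c in coeffs])
    return energies / energies.sum()  # normalised so values lie in [0, 1]

# Synthetic stand-in data: 100 one-second segments (256 Hz) with labels 0-4
# for the five imagined vowels. Real inputs would be the recorded EEG segments.
rng = np.random.default_rng(0)
fs = 256
segments = rng.standard_normal((100, fs))
labels = rng.integers(0, 5, size=100)
X = np.vstack([subband_energy_features(s) for s in segments])

# A rough DBN stand-in: greedily trained stacked RBMs with a logistic-regression
# read-out (scikit-learn has no full DBN with supervised fine-tuning).
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, labels)
print(dbn_like.score(X, labels))  # training accuracy on the synthetic data
```

The relative sub-band energies fall in [0, 1], which suits the Bernoulli RBM layers; per-band classification as reported in the paper would simply restrict the feature vector to the coefficients of a single sub-band (e.g., theta or gamma) before training.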