{"title":"Determining Gaze Information from Steady-State Visually-Evoked Potentials","authors":"Ebru Sayilgan, Y. Yuce, Y. Isler","doi":"10.7212/zkufbd.v10i2.1588","DOIUrl":null,"url":null,"abstract":"Brain-Computer Interface (BCI) is a communication system that enables individuals who lack control and use of their existing muscular and nervous systems to interact with the outside world because of various reasons. A BCI enables its user to communicate with some electronic devices by processing signals generated during brain activities. This study attempts to detect and collect gaze data within Electroencephalogram (EEG) signals through classification. To this purpose, three datasets comprised of EEG signals recorded by researchers from the Autonomous University were adopted. The EEG signals in these datasets were collected in a setting where subjects’ gaze into five boxes shown on a computer screen was recognized through Steady-State Visually Evoked Potential based BCI. The classification was performed using algorithms of Naive Bayes, Extreme Learning Machine, and Support Vector Machines. Three feature sets; Autoregressive, Hjorth, and Power Spectral Density, were extracted from EEG signals. As a result, using Autoregressive features, classifiers performed between 45.67% and 78.34%, whereas for Hjorth their classification performance was within 43.34-75.25%, and finally, by using Power Spectral Density their classification performance was between 57.36% and 83.42% Furthermore, classifier performances using Naive Bayes varied between 52.23% and 79.15% for Naive Bayes, 56.32-83.42% for Extreme Learning Machine, and 43.34-72.27% for Support Vector Machines by regarding classification algorithms. Among achieved accuracy performances, the best accuracy is 83.42%, achieved by the Power Spectral Density features and Extreme Learning Machine algorithm pair.","PeriodicalId":17742,"journal":{"name":"Karaelmas Science and Engineering Journal","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Karaelmas Science and Engineering Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7212/zkufbd.v10i2.1588","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
A Brain-Computer Interface (BCI) is a communication system that enables individuals who, for various reasons, lack control of their muscular and nervous systems to interact with the outside world. A BCI allows its user to communicate with electronic devices by processing signals generated during brain activity. This study attempts to detect gaze information from Electroencephalogram (EEG) signals through classification. For this purpose, three datasets of EEG signals recorded by researchers at the Autonomous University were adopted. The EEG signals in these datasets were collected in a setting where the subjects' gaze at five boxes shown on a computer screen was recognized by a Steady-State Visually Evoked Potential (SSVEP) based BCI. Three feature sets, Autoregressive coefficients, Hjorth parameters, and Power Spectral Density, were extracted from the EEG signals, and classification was performed with the Naive Bayes, Extreme Learning Machine, and Support Vector Machine algorithms. Across classifiers, accuracy ranged from 45.67% to 78.34% with Autoregressive features, from 43.34% to 75.25% with Hjorth features, and from 57.36% to 83.42% with Power Spectral Density features. Grouped by classification algorithm, accuracy varied between 52.23% and 79.15% for Naive Bayes, between 56.32% and 83.42% for Extreme Learning Machine, and between 43.34% and 72.27% for Support Vector Machines. The best accuracy, 83.42%, was achieved by pairing Power Spectral Density features with the Extreme Learning Machine algorithm.
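The abstract describes a feature-extraction-plus-classification pipeline. The sketch below is an illustrative reconstruction, not the authors' code: it computes Autoregressive (Yule-Walker), Hjorth, and band-averaged Welch Power Spectral Density features from synthetic SSVEP-like epochs and cross-validates Naive Bayes and SVM classifiers with scikit-learn. The sampling rate, channel count, AR order, frequency bands, and flicker frequencies are assumptions, and the Extreme Learning Machine is omitted because it has no scikit-learn implementation.

```python
"""Hedged sketch of an SSVEP gaze-classification pipeline (AR / Hjorth / PSD features)."""
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256        # assumed sampling rate (Hz)
AR_ORDER = 6    # assumed autoregressive model order


def ar_features(x, order=AR_ORDER):
    """Yule-Walker AR coefficients of a single-channel epoch."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    return np.linalg.solve(toeplitz(r[:-1]), r[1:])


def hjorth_features(x):
    """Hjorth activity, mobility, and complexity of a single-channel epoch."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return np.array([activity, mobility, complexity])


def psd_features(x, fs=FS):
    """Mean Welch PSD in assumed EEG bands (delta through low gamma)."""
    freqs, pxx = welch(x, fs=fs, nperseg=fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    return np.array([pxx[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])


def epoch_features(epoch, extractor):
    """Apply a single-channel extractor to each channel and concatenate."""
    return np.concatenate([extractor(ch) for ch in epoch])


# Synthetic stand-in for the SSVEP epochs: 5 gaze targets, 2 channels, 2-second epochs.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 100, 2, 2 * FS
stim_freqs = [6.0, 7.5, 8.57, 10.0, 12.0]   # assumed flicker frequencies of the five boxes
y = rng.integers(0, 5, n_epochs)
t = np.arange(n_samples) / FS
X_raw = np.stack([
    np.sin(2 * np.pi * stim_freqs[label] * t)
    + rng.normal(0.0, 1.0, (n_channels, n_samples))
    for label in y
])

for feat_name, extractor in [("AR", ar_features), ("Hjorth", hjorth_features), ("PSD", psd_features)]:
    X = np.array([epoch_features(ep, extractor) for ep in X_raw])
    for clf_name, clf in [("NaiveBayes", GaussianNB()), ("SVM", SVC(kernel="rbf"))]:
        pipe = make_pipeline(StandardScaler(), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{feat_name:6s} + {clf_name:10s}: {acc:.2%}")
```

On real recordings, the synthetic-data block would be replaced by epochs loaded from the datasets, and an Extreme Learning Machine (a random hidden layer with a least-squares readout) could be slotted in alongside the two scikit-learn classifiers.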