A spatial selective visual attention pattern recognition method based on joint short SSVEP
Songyun Xie, Fangshi Zhu, K. Obermayer, P. Ritter, Linan Wang
The 2013 International Joint Conference on Neural Networks (IJCNN), August 2013
DOI: 10.1109/IJCNN.2013.6706872
Abstract
Spatial selective attention pattern recognition plays a significant role in monitoring the state of specific personnel (e.g., pilots). Steady-State Visual Evoked Potentials (SSVEP) were recorded from the scalp of 6 subjects who were cued to attend to a flickering sequence displayed in one visual field while ignoring a similar sequence with a different flicker rate in the opposite field. The SSVEP elicited by either flickering stimulus was enhanced when attention was directed toward it rather than toward the opposite visual field. The largest enhancement generally appears over the posterior scalp contralateral to the stimulated visual field. This attention-driven amplitude enhancement of the SSVEP can be used to track shifts of attention. In this paper, we develop an algorithm that extracts short SSVEP segments, selectively combines them into a joint temporal-spatial selective attention feature, and uses a Support Vector Machine (SVM) to classify the joint features of different attention patterns. By segmenting the long single-trial SSVEP data (12 s) into short pieces (1 s), we greatly reduce the training time while maintaining a high recognition accuracy (>93%) for most subjects, which makes it possible to monitor spatial selective attention in near real time.
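To make the described pipeline concrete, below is a minimal illustrative sketch in Python (NumPy/scikit-learn): a 12 s trial is cut into 1 s segments, the spectral amplitude at each of the two flicker frequencies is taken per channel and per segment, the values are concatenated into a joint temporal-spatial feature vector, and an SVM separates the two attention directions. The sampling rate, flicker frequencies, channel count, and the synthetic data are assumptions for illustration only; they are not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): short-segment SSVEP
# features + SVM classification of left/right spatial attention.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250              # assumed sampling rate (Hz)
TRIAL_SEC = 12        # long single-trial length from the abstract (s)
SEG_SEC = 1           # short-segment length from the abstract (s)
FREQS = (10.0, 12.0)  # assumed flicker rates of the two stimuli (Hz)
N_CHANNELS = 8        # assumed number of posterior electrodes

def segment_features(trial):
    """trial: (n_channels, FS*TRIAL_SEC) array -> joint feature vector.

    Each 1 s segment contributes the FFT amplitude at both flicker
    frequencies for every channel; segments are concatenated so the
    feature keeps both temporal and spatial structure.
    """
    seg_len = FS * SEG_SEC
    freq_axis = np.fft.rfftfreq(seg_len, d=1.0 / FS)
    feats = []
    for start in range(0, trial.shape[1], seg_len):
        seg = trial[:, start:start + seg_len]
        spectrum = np.abs(np.fft.rfft(seg, axis=1)) / seg_len
        for f in FREQS:
            idx = np.argmin(np.abs(freq_axis - f))
            feats.append(spectrum[:, idx])  # amplitude at f, all channels
    return np.concatenate(feats)

# Synthetic stand-in trials: the attended stimulus frequency carries more
# power than the ignored one, mimicking the attention-driven enhancement.
rng = np.random.default_rng(0)
t = np.arange(FS * TRIAL_SEC) / FS
X, y = [], []
for label in (0, 1):               # 0: attend FREQS[0], 1: attend FREQS[1]
    for _ in range(40):
        trial = 0.5 * rng.standard_normal((N_CHANNELS, t.size))
        trial += 1.0 * np.sin(2 * np.pi * FREQS[label] * t)      # attended
        trial += 0.4 * np.sin(2 * np.pi * FREQS[1 - label] * t)  # ignored
        X.append(segment_features(trial))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Because each trial is represented by per-segment amplitudes rather than one long epoch, the classifier can be trained on far shorter data windows, which is the property the paper exploits to cut training time while keeping accuracy high.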