{"title":"基于音符意象的脑机接口协议","authors":"Anna Montevilla, Guillermo Sahonero-Alvarez","doi":"10.1109/LA-CCI48322.2021.9769845","DOIUrl":null,"url":null,"abstract":"The application of Brain-Computer Interfaces is expected to become a matter of daily life. For this purpose, several efforts are being developed to ensure that users can employ this technology without difficulties. A large amount of studies consider motor imagery, which implies the usage of sensorimotor rhythms produced when imaging motor actions. However, previous works have shown that from a sample of population, a portion of users (15~30%) is unable to efficiently control a BCI based on such paradigm. The roots of this issue have been partially located to different factors related to the training protocol that users follow to learn how to use the system. Thus, in order to extend the applicability of BCIs, training procedures must consider different approaches. Musical imagery is another mental task that may be used to control BCIs and requires users to have music related thoughts or imagine specific notes and even songs. In this work, we propose a protocol to explore the properties of Musical Imagery based training procedures. For this, we developed both offline and online experiments, where the last one consisted of 4 sessions. The data-processing steps include filtering the data using a FIR filter to later extract features using PCA, and classify such features with a multi-class SVM. Our results show that the offline classification is comparable to motor imagery based BCIs as the accuracy is between 80% to 95%. 
Moreover, we found that the online setup results point to up to 64% of accuracy for the third session with feedback.","PeriodicalId":431041,"journal":{"name":"2021 IEEE Latin American Conference on Computational Intelligence (LA-CCI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A protocol for Brain-Computer Interfaces based on Musical Notes Imagery\",\"authors\":\"Anna Montevilla, Guillermo Sahonero-Alvarez\",\"doi\":\"10.1109/LA-CCI48322.2021.9769845\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The application of Brain-Computer Interfaces is expected to become a matter of daily life. For this purpose, several efforts are being developed to ensure that users can employ this technology without difficulties. A large amount of studies consider motor imagery, which implies the usage of sensorimotor rhythms produced when imaging motor actions. However, previous works have shown that from a sample of population, a portion of users (15~30%) is unable to efficiently control a BCI based on such paradigm. The roots of this issue have been partially located to different factors related to the training protocol that users follow to learn how to use the system. Thus, in order to extend the applicability of BCIs, training procedures must consider different approaches. Musical imagery is another mental task that may be used to control BCIs and requires users to have music related thoughts or imagine specific notes and even songs. In this work, we propose a protocol to explore the properties of Musical Imagery based training procedures. For this, we developed both offline and online experiments, where the last one consisted of 4 sessions. The data-processing steps include filtering the data using a FIR filter to later extract features using PCA, and classify such features with a multi-class SVM. 
Our results show that the offline classification is comparable to motor imagery based BCIs as the accuracy is between 80% to 95%. Moreover, we found that the online setup results point to up to 64% of accuracy for the third session with feedback.\",\"PeriodicalId\":431041,\"journal\":{\"name\":\"2021 IEEE Latin American Conference on Computational Intelligence (LA-CCI)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Latin American Conference on Computational Intelligence (LA-CCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/LA-CCI48322.2021.9769845\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Latin American Conference on Computational Intelligence (LA-CCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LA-CCI48322.2021.9769845","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A protocol for Brain-Computer Interfaces based on Musical Notes Imagery
The application of Brain-Computer Interfaces (BCIs) is expected to become part of daily life. To this end, several efforts are under way to ensure that users can employ this technology without difficulty. A large number of studies consider motor imagery, which relies on the sensorimotor rhythms produced when imagining motor actions. However, previous work has shown that a portion of users (15% to 30%) is unable to efficiently control a BCI based on this paradigm. The roots of this issue have been partially traced to factors related to the training protocol that users follow to learn how to use the system. Thus, in order to extend the applicability of BCIs, training procedures must consider different approaches. Musical imagery is another mental task that may be used to control BCIs; it requires users to produce music-related thoughts or to imagine specific notes and even songs. In this work, we propose a protocol to explore the properties of musical-imagery-based training procedures. To this end, we developed both offline and online experiments, the latter consisting of four sessions. The data-processing steps include filtering the data with a FIR filter, extracting features using PCA, and classifying those features with a multi-class SVM. Our results show that offline classification is comparable to motor-imagery-based BCIs, with accuracy between 80% and 95%. Moreover, the online setup reached up to 64% accuracy in the third session with feedback.
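The FIR-filter → PCA → multi-class SVM pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, filter band, number of PCA components, SVM kernel, and the synthetic stand-in for epoched EEG data are all assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250  # assumed EEG sampling rate in Hz (not stated in the paper)
n_trials, n_channels, n_samples = 120, 8, fs * 2

# Synthetic stand-in for epoched EEG data: (trials, channels, samples).
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 4, size=n_trials)  # e.g. four imagined musical notes

# 1) FIR band-pass filter along the time axis (band edges are assumed).
taps = firwin(numtaps=101, cutoff=[1.0, 40.0], pass_zero=False, fs=fs)
X_filt = filtfilt(taps, [1.0], X_raw, axis=-1)

# 2) Flatten each trial so PCA can extract a low-dimensional feature vector.
X_feat = X_filt.reshape(n_trials, -1)

# 3) PCA feature extraction + multi-class SVM (SVC handles multi-class
#    classification via one-vs-one by default).
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(clf, X_feat, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

On real musical-imagery epochs the same pipeline applies unchanged; only the data loading and the hyperparameters (filter band, component count, kernel) would need tuning to reproduce the reported 80–95% offline accuracy.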