Title: Pilot Study on Objective Evaluation of Human Auditory Ability using Hybrid EEG and FNIRS Acquisition
Authors: Zihao Xu, G. Ni, Siyang Han, Q. Zheng, Dong Ming
DOI: 10.1109/CIVEMSA45640.2019.9071629
Venue: 2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)
Published: June 2019
Citations: 1
Abstract
Multimodal brain-computer interfaces are useful for identifying human brain states: combining modalities can improve objective evaluation by offering both high spatial and high temporal resolution. In this paper, we use electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals to characterize the responses of brain regions to sounds. Subjects first completed a brief adaptation phase and were then asked to make judgments on five types of sound. During the test, EEG and hemodynamic data were collected synchronously in real time; the collected data were then preprocessed, features were extracted, and the results were analyzed. The analysis shows that the subjects' brain responses differ across sound types, which suggests that EEG information and hemodynamic parameters could serve as objective indexes of a subject's auditory ability. In future work, we plan to conduct further experiments to verify these conjectures and to propose an objective evaluation system for auditory assessment, especially for people with impaired hearing.
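The abstract does not specify which EEG features were extracted, but spectral band power is one common choice in auditory EEG studies. The following is a minimal illustrative sketch (not the authors' pipeline) of estimating band power from a single EEG channel via the periodogram; the signal here is synthetic and all names are hypothetical.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Estimate average power in the [f_lo, f_hi] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Synthetic single-channel "EEG": a 10 Hz alpha-band oscillation plus noise,
# 2 seconds sampled at 250 Hz.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # contains the 10 Hz component
beta = band_power(eeg, fs, 13, 30)   # mostly noise in this synthetic signal
```

In a real study such per-band features would be computed per channel and per stimulus condition, then fed to statistical tests or a classifier to compare responses across the five sound types.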