{"title":"Poster: Imitation Learning for Hearing Loss Detection with Cortical Speech-Evoked Responses","authors":"Cicelia Siu, Beiyu Lin","doi":"10.1109/SEC54971.2022.00045","DOIUrl":null,"url":null,"abstract":"Electroencephalograph (EEG) data is used to diagnose brain conditions, such as epilepsy. The brain gives off electrical activity in voltages at different parts of the cerebral cortex. When electroen-cephalograph (EEG) data is taken, analyzing the data can show which part of the brain has activity and how much activity. However, currently studies only consider spatial and temporal parts of brain activities separately. In this study, we propose to fuse spatio-temporal information together via imitation learning to better understand brain activities, especially cortical speech-evoked responses. We will validate our methods via a real-life dataset to understand the patterns and distinguish hearing-impaired individuals from normal-hearing individuals based on brain activities (i.e., cortical speech-evoked responses). To the best of our knowledge, we are the first group to use imitation learning for brain activity study, especially the cortical speech-evoked responses. 
Our methods have the potential to be integrated as a sustainable service and can be leveraged for future hearing research.","PeriodicalId":364062,"journal":{"name":"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEC54971.2022.00045","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Electroencephalography (EEG) data is used to diagnose brain conditions such as epilepsy. The brain produces electrical activity, measurable as voltages, across different parts of the cerebral cortex. Analyzing EEG recordings can reveal which parts of the brain are active and how active they are. However, current studies consider the spatial and temporal aspects of brain activity only separately. In this study, we propose to fuse spatio-temporal information via imitation learning to better understand brain activity, especially cortical speech-evoked responses. We will validate our methods on a real-life dataset to characterize these patterns and to distinguish hearing-impaired individuals from normal-hearing individuals based on brain activity (i.e., cortical speech-evoked responses). To the best of our knowledge, we are the first group to apply imitation learning to the study of brain activity, in particular cortical speech-evoked responses. Our methods have the potential to be integrated as a sustainable service and can be leveraged for future hearing research.
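The abstract's core idea, fusing spatial (channel) and temporal (sample) EEG information into one representation and training a policy to imitate expert diagnostic labels, can be illustrated with a minimal behavioral-cloning sketch. Everything below is an illustrative assumption (synthetic data, dimensions, and a logistic-regression "policy"), not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for cortical speech-evoked responses:
# n_trials trials, each an (n_channels x n_samples) EEG epoch.
n_trials, n_channels, n_samples = 200, 8, 50
labels = rng.integers(0, 2, n_trials)        # expert labels: 0 = normal hearing, 1 = hearing impaired
epochs = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
epochs[labels == 1, :, 20:30] += 0.8         # inject a class-dependent evoked deflection

# Fuse spatial and temporal information by flattening each epoch
# into a single spatio-temporal feature vector, then standardize.
X = epochs.reshape(n_trials, -1)
X = (X - X.mean(0)) / (X.std(0) + 1e-8)

# Behavioral cloning, the simplest form of imitation learning: fit a
# policy (here logistic regression via gradient descent) to reproduce
# the expert's decisions on the observed states.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted action probabilities
    w -= lr * (X.T @ (p - labels)) / n_trials
    b -= lr * (p - labels).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == labels).mean()
print(f"imitation (training) accuracy: {accuracy:.2f}")
```

The flattening step is only the simplest possible spatio-temporal fusion; the poster's contribution is precisely a richer fusion learned through imitation, which this sketch does not attempt to reproduce.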