Predicting Audio Training Learning Outcomes Using EEG Data and KNN Modeling

Abel Desoto, Ethan Santos, Francis Liri, K. Faller, Devin Heng, Joshua Dodd, K. George, Julia R. Drouin

2022 IEEE World AI IoT Congress (AIIoT), June 6, 2022. DOI: 10.1109/aiiot54504.2022.9817164
People are constantly surrounded by some form of sound, which can occasionally interfere with daily tasks such as conversation. When sound interferes with daily activities, it becomes noise, that is, undesired sound. Depending on the surroundings, a person may be exposed to varying levels of noise, creating hearing challenges, especially for those with hearing disabilities. Researchers have investigated how the brain interprets auditory information and shown that the brain can be 'primed' to quickly tune hearing and effectively learn to understand sounds. Building on this concept, a software-based training solution is proposed that uses EEG signals to identify whether a person with a hearing disability is learning. Such a system could support auditory training for people with hearing disabilities while eliminating the need for a clinician to administer it, making the process faster and simpler. An overall framework for the proposed system and an outline of its essential components are presented. The work extends prior research by refining the testing and experimental methods, addressing some of its weaknesses, and performing similar studies with a larger participant pool. Furthermore, a machine learning algorithm, K-Nearest Neighbor (KNN), is applied to evaluate EEG data and predict a subject's understanding of distorted audio.
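To illustrate the kind of classification step the abstract describes, the sketch below (not the authors' code) applies scikit-learn's KNeighborsClassifier to EEG-derived feature vectors to predict whether a subject understood the distorted audio. The feature dimensionality, trial counts, and labels are hypothetical placeholders standing in for whatever features and annotations the study actually used.

```python
# Minimal sketch: KNN classification of EEG feature vectors.
# Placeholder data and labels; not the authors' pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials, each summarized by 16 EEG features
# (e.g., band-power values per channel). Labels: 1 = understood, 0 = not.
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize features before KNN, since Euclidean distance is scale-sensitive.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

With real EEG recordings, the placeholder arrays would be replaced by per-trial features (for example, band power per electrode) and labels derived from the subject's responses to the distorted-audio task; the number of neighbors would typically be tuned by cross-validation.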