Human Emotion Recognition Models Using Machine Learning Techniques
Aftab Alam, S. Urooj, A. Q. Ansari
DOI: 10.1109/REEDCON57544.2023.10151406
2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON), May 2023
Abstract: Researchers have long been curious whether a computer can detect human emotions precisely and accurately, and many research publications on human-machine interaction systems have been reported. Emotion classifiers based on machine learning are developed from feature datasets extracted from physiological and non-physiological parameters. Emotion recognition can be performed either from facial, speech, or audio-visual data, or from physiological signals such as ECG, EEG, EMG, GSR, and respiration. Many researchers have explored facial recognition techniques for emotion recognition, but facial expressions can be masked: a sad person can pretend to smile, and vice versa. Physiological signals such as ECG, EEG, GSR, and respiration cannot be masked, because they are generated involuntarily. Many datasets are publicly available for researchers to use in developing efficient emotion classifier systems. In this work, publicly available EEG, ECG, and GSR datasets, recorded while subjects watched emotional videos, are used to develop emotion classifiers with machine learning techniques. Three physiological feature datasets, LUMED-2 (EEG + GSR), SWELL (HRV), and YAAD (ECG + GSR), are used to train models and classify emotions. The machine learning classifiers used are Random Forest, SVM, KNN, and Decision Tree. The maximum average classification accuracy achieved is close to 100% for at least one classifier on each dataset. It is observed that physiological signals such as EEG, ECG, and GSR carry distinguishable emotional features that trained machine learning models can use to detect a person's emotional state precisely.
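The abstract names four classical classifiers trained on physiological feature vectors. A minimal sketch of that setup, using scikit-learn, is shown below. The synthetic feature matrix stands in for features extracted from EEG/ECG/GSR recordings; the feature count, class labels, and train/test split are illustrative assumptions, not the authors' actual pipeline or datasets.

```python
# Sketch: training the four classifier families named in the abstract
# (Random Forest, SVM, KNN, Decision Tree) on physiological feature vectors.
# The random data below is a placeholder for features extracted from
# EEG/ECG/GSR signals; it is NOT the LUMED-2, SWELL, or YAAD data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))       # 300 samples, 16 assumed physiological features
y = rng.integers(0, 4, size=300)     # 4 assumed emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

# Fit each classifier and record its held-out accuracy, mirroring the
# per-classifier, per-dataset comparison the abstract describes.
accuracies = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    accuracies[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {accuracies[name]:.3f}")
```

On real feature datasets the same loop would be run once per dataset (LUMED-2, SWELL, YAAD), replacing the synthetic `X` and `y` with the extracted features and emotion labels.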