{"title":"通过有效的特征估计和分类模型识别人类情感","authors":"S. Pangaonkar, R. Gunjan, Virendra Shete","doi":"10.1109/CCGE50943.2021.9776405","DOIUrl":null,"url":null,"abstract":"Voice Emotion Recognition (VER) is a dynamic and has implications on a wide range of research areas. Use of a computer for voice emotion recognition is a way to study the voice signal of a speaker, as well as is a process that is altered by inner emotions. Human Machine Interface (HMI) is very vital and opted to implement this effectively and an innovative way. To develop new recognition methods, this research paper evaluates the basic emotions of human. Accurate detection of emotional states can be further used as a machine learning database for interdisciplinary experiments. The proposed system is an algorithmic method that first extracts the audio signal from the microphone, preprocesses it, and then evaluates the parameters based on various characteristics. The model is trained through the Mel Frequency Cepstral Coefficient (MFCC) and PRAAT (Speech Analysis in Phonetics) coefficients. By creating a feature map using these, Convolutional Neural Networks (CNN) effectively learn and classify the attributes of perceived signals of basic emotions such as sadness, surprise, happiness, anger, fear, neutral and disgust. The proposed method provides good recognition rate.","PeriodicalId":130452,"journal":{"name":"2021 International Conference on Computing, Communication and Green Engineering (CCGE)","volume":"312 10","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recognition of Human Emotion through effective estimations of Features and Classification Model\",\"authors\":\"S. Pangaonkar, R. Gunjan, Virendra Shete\",\"doi\":\"10.1109/CCGE50943.2021.9776405\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Voice Emotion Recognition (VER) is a dynamic and has implications on a wide range of research areas. Use of a computer for voice emotion recognition is a way to study the voice signal of a speaker, as well as is a process that is altered by inner emotions. Human Machine Interface (HMI) is very vital and opted to implement this effectively and an innovative way. To develop new recognition methods, this research paper evaluates the basic emotions of human. Accurate detection of emotional states can be further used as a machine learning database for interdisciplinary experiments. The proposed system is an algorithmic method that first extracts the audio signal from the microphone, preprocesses it, and then evaluates the parameters based on various characteristics. The model is trained through the Mel Frequency Cepstral Coefficient (MFCC) and PRAAT (Speech Analysis in Phonetics) coefficients. By creating a feature map using these, Convolutional Neural Networks (CNN) effectively learn and classify the attributes of perceived signals of basic emotions such as sadness, surprise, happiness, anger, fear, neutral and disgust. 
The proposed method provides good recognition rate.\",\"PeriodicalId\":130452,\"journal\":{\"name\":\"2021 International Conference on Computing, Communication and Green Engineering (CCGE)\",\"volume\":\"312 10\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Computing, Communication and Green Engineering (CCGE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGE50943.2021.9776405\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Computing, Communication and Green Engineering (CCGE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGE50943.2021.9776405","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recognition of Human Emotion through effective estimations of Features and Classification Model
Voice Emotion Recognition (VER) is a dynamic field with implications for a wide range of research areas. Using a computer for voice emotion recognition is a way to study a speaker's voice signal, a signal that is shaped by the speaker's inner emotions. An effective and innovative Human Machine Interface (HMI) is vital for implementing such recognition. To develop new recognition methods, this paper evaluates basic human emotions. Accurately detected emotional states can further serve as a machine learning database for interdisciplinary experiments. The proposed system is an algorithmic method that first captures the audio signal from a microphone, preprocesses it, and then estimates parameters based on various characteristics. The model is trained on Mel Frequency Cepstral Coefficients (MFCC) and coefficients obtained with PRAAT, a speech-analysis tool for phonetics. By building a feature map from these coefficients, a Convolutional Neural Network (CNN) effectively learns and classifies the attributes of perceived signals for basic emotions such as sadness, surprise, happiness, anger, fear, neutral, and disgust. The proposed method achieves a good recognition rate.
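As a rough illustration of the feature-extraction step described above, the following is a minimal sketch assuming librosa for MFCC computation. The paper's exact preprocessing pipeline and its PRAAT-derived coefficients are not specified in the abstract, so the file path, sampling rate, and padding parameters below are illustrative assumptions, not the authors' setup.

```python
# Hypothetical MFCC feature-map extraction (assumed parameters, not the paper's exact setup).
import numpy as np
import librosa

def extract_mfcc_map(path, sr=16000, n_mfcc=13, max_frames=200):
    """Load an utterance and return a fixed-size MFCC feature map for a CNN."""
    y, sr = librosa.load(path, sr=sr)           # resample to a common rate
    y, _ = librosa.effects.trim(y)              # simple silence trimming as preprocessing
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    # Pad or truncate along the time axis so every utterance yields the same shape.
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :max_frames]
    return mfcc[..., np.newaxis]                # add a channel axis for the CNN input

# Example usage (path is a placeholder):
# features = extract_mfcc_map("speech_sample.wav")
```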
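The classification stage could then be a small CNN over these feature maps. The sketch below, written with tf.keras, is an assumed architecture for illustration only; the abstract does not publish the authors' layer configuration. The seven output classes match the emotions listed above.

```python
# Illustrative CNN classifier over MFCC feature maps (architecture is an assumption).
import tensorflow as tf

EMOTIONS = ["sadness", "surprise", "happiness", "anger", "fear", "neutral", "disgust"]

def build_emotion_cnn(input_shape=(13, 200, 1), num_classes=len(EMOTIONS)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would follow the usual pattern, with features stacked from extract_mfcc_map:
# model = build_emotion_cnn()
# model.fit(train_features, train_labels, validation_split=0.1, epochs=30)
```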