{"title":"Emotion Detection via Voice and Speech Recognition","authors":"Rohit Rastogi, Tushar Anand, Shubham Kumar Sharma, Sarthak Panwar","doi":"10.4018/ijcbpl.333473","DOIUrl":null,"url":null,"abstract":"Emotion detection from voice signals is needed for human-computer interaction (HCI), which is a difficult challenge. In the literature on speech emotion recognition, various well known speech analysis and classification methods have been used to extract emotions from signals. Deep learning strategies have recently been proposed as a workable alternative to conventional methods and discussed. Several recent studies have employed these methods to identify speech-based emotions. The review examines the databases used, the emotions collected, and the contributions to speech emotion recognition. The Speech Emotion Recognition Project was created by the research team. It recognizes human speech emotions. The research team developed the project using Python 3.6. RAVDEESS dataset was also used since it contained eight distinct emotions expressed by all speakers. The RAVDESS dataset, Python programming languages, and Pycharm as an IDE were all used by the author team.","PeriodicalId":38296,"journal":{"name":"International Journal of Cyber Behavior, Psychology and Learning","volume":"122 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Cyber Behavior, Psychology and Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/ijcbpl.333473","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Emotion detection from voice signals is needed for human-computer interaction (HCI) and remains a difficult challenge. In the speech emotion recognition literature, various well-known speech analysis and classification methods have been used to extract emotions from signals, and deep learning strategies have recently been proposed and discussed as a workable alternative to these conventional methods. Several recent studies have employed such methods to identify speech-based emotions. The review examines the databases used, the emotions collected, and the contributions to speech emotion recognition. The research team created a Speech Emotion Recognition project that recognizes emotions in human speech. The project was developed in Python 3.6, with PyCharm as the IDE, and uses the RAVDESS dataset, which contains eight distinct emotions expressed by all speakers.
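The abstract does not specify the feature-extraction or classification pipeline. As a rough, hedged illustration only, a speech-emotion-recognition sketch over RAVDESS-style files might pair MFCC features with a small classifier; the `ravdess_data` folder path, the `extract_features`/`load_dataset` helpers, and the MLPClassifier choice below are assumptions for the sketch, not the authors' implementation. The emotion codes follow the RAVDESS filename convention, where the third hyphen-separated field encodes one of the eight emotions.

```python
# Minimal sketch of a speech-emotion-recognition pipeline on RAVDESS-style
# audio files. Paths, parameters, and the classifier are illustrative
# assumptions, not the authors' exact implementation.
import glob
import os

import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# RAVDESS filenames encode the emotion in their third field, e.g.
# "03-01-05-01-02-01-12.wav" -> "05" (angry).
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def extract_features(path, n_mfcc=40):
    """Load a clip and summarise it as its mean MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1)

def load_dataset(root):
    """Walk a directory of RAVDESS .wav files and build feature/label arrays."""
    features, labels = [], []
    for path in glob.glob(os.path.join(root, "**", "*.wav"), recursive=True):
        code = os.path.basename(path).split("-")[2]
        if code in EMOTIONS:
            features.append(extract_features(path))
            labels.append(EMOTIONS[code])
    return np.array(features), np.array(labels)

if __name__ == "__main__":
    # "ravdess_data" is a hypothetical local folder holding the dataset.
    X, y = load_dataset("ravdess_data")
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))
```

A pipeline of this shape (hand-crafted acoustic features plus a neural or classical classifier) is the conventional baseline the abstract contrasts with end-to-end deep learning approaches.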
About the journal:
The mission of the International Journal of Cyber Behavior, Psychology and Learning (IJCBPL) is to identify learners' online behavior based on theories in human psychology, define online education phenomena as explained by social and cognitive learning theories and principles, and interpret the complexity of cyber learning. IJCBPL offers a multi-disciplinary approach that incorporates findings from brain research, biology, psychology, human cognition, developmental theory, sociology, motivation theory, and social behavior. The journal welcomes both quantitative and qualitative studies, using experimental designs as well as ethnographic methods, to understand the dynamics of cyber learning. Impacting multiple areas of research and practice, including secondary and higher education, professional training, Web-based design and development, media learning, adolescent education, school and community, and social communication, IJCBPL targets school teachers, counselors, researchers, and online designers.