Recommendation of Music Based on Facial Emotion using Machine Learning Technique

S. G, Evangelin Blessy A, Jeya Aravinth S, Vignesh Prabhu M, VijayaSarathy R

Advances in Computational Intelligence in Materials Science, published 2023-06-07
DOI: 10.53759/acims/978-9914-9946-9-8_16 (https://doi.org/10.53759/acims/978-9914-9946-9-8_16)
Abstract
Music plays a vital role in human life and is a recognized therapy that can reduce depression and anxiety as well as improve mood, self-esteem, and quality of life. Music has the power to change human emotion, which is expressed through facial expression, yet recommending music based on emotion is a difficult task. Existing systems for emotion recognition and music recommendation focus on depression and mental health analysis. Hence, a model is proposed to recommend music based on facial expression recognition in order to improve or change the user's emotion. Facial emotion recognition (FER) is implemented using the YOLOv5 algorithm. The output of FER is an emotion classified as happy, anger, sad, or neutral, which serves as the input to the music recommendation system. A music player is created to keep track of the user's favorites for each emotion; if the user is new to the system, generalized music is suggested. The aim of the paper is to recommend music to the user according to their emotion in order to further improve it.
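The abstract describes a two-stage pipeline: a YOLOv5-based FER model classifies the face into one of four emotions, and the predicted label drives the recommendation, falling back to generalized music for new users. The sketch below illustrates that flow under stated assumptions; the weights file name, the emotion-to-playlist mapping, and the per-user favorites store are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the pipeline in the abstract, assuming a YOLOv5 model
# fine-tuned on the four emotion classes (happy, anger, sad, neutral).
# "fer_emotions.pt", the playlists, and USER_FAVORITES are illustrative only.
import torch

# Load a custom-trained YOLOv5 model via the standard torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="fer_emotions.pt")

# Placeholder emotion-to-playlist mapping; a real system would query a music library.
GENERAL_PLAYLIST = ["Track A", "Track B"]
PLAYLISTS = {
    "happy": ["Upbeat 1", "Upbeat 2"],
    "anger": ["Calming 1", "Calming 2"],
    "sad": ["Uplifting 1", "Uplifting 2"],
    "neutral": ["Ambient 1", "Ambient 2"],
}
# Per-user history of favorite tracks keyed by emotion (new users are absent).
USER_FAVORITES: dict[str, dict[str, list[str]]] = {}


def detect_emotion(image_path: str) -> str:
    """Run FER on one image and return the highest-confidence emotion label."""
    results = model(image_path)
    detections = results.pandas().xyxy[0]   # one row per detected face
    if detections.empty:
        return "neutral"                    # fall back when no face is found
    best = detections.sort_values("confidence", ascending=False).iloc[0]
    return best["name"]                     # class name, e.g. "happy"


def recommend(user_id: str, image_path: str) -> list[str]:
    """Recommend tracks for the detected emotion; new users get generalized music."""
    emotion = detect_emotion(image_path)
    if user_id not in USER_FAVORITES:       # new user: generalized suggestions
        return GENERAL_PLAYLIST
    favorites = USER_FAVORITES[user_id].get(emotion)
    return favorites if favorites else PLAYLISTS[emotion]


if __name__ == "__main__":
    print(recommend("new_user", "face.jpg"))
```

As a usage note, a returning user's favorites for the detected emotion take priority over the generic per-emotion playlist, mirroring the abstract's claim that the player tracks favorites per emotion.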