{"title":"Affective Music Player for Multiple Emotion Recognition Using Facial Expressions with SVM","authors":"Supriya L P, Rashmita Khilar","doi":"10.1109/I-SMAC52330.2021.9640706","DOIUrl":null,"url":null,"abstract":"Affective computing is a form of machinery that enables the machine to respond to a human stimulus in some way, usually associated with sophisticated mood or emotional indications. This emotion based music player project is a novel approach that helps the user play songs automatically based on the user's emotions. This understands user's facial emotions and plays the songs according to their emotions. Music as a major impact on the regular life of human beings and in innovative, progressive technologies. Generally the operator needs to do with the challenge of looking for songs manually navigate through the playlist to choose from. At this point it suggests an effective and precise model, which would produce a playlist constructed on the user's present emotional state and behavior. Existing strategies to mechanize the method of creating the playlist are computationally moderate, less solid and some of the time includes the utilization of additional hardware. Discourse is the foremost antiquated and ordinary way of communicating considerations, feelings, and temperament and it requires tall specialized, time, and taken a toll. This proposed framework is based on extricating facial expressions in real-time, as well as extricating sound highlights from tunes to decipher into a specific feeling that will naturally produce a playlist so that the fetched of handling is moderately low. The Emotions are recognized using Support Vector Machine (SVM) . The webcam captures the user's image. It then extracts the user's facial features from the captured image. The music will be played from the pre- defined files, depending on the emotion.","PeriodicalId":178783,"journal":{"name":"2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)","volume":"27 7","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/I-SMAC52330.2021.9640706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Affective computing is a field of technology that enables a machine to respond to human stimuli, typically subtle mood or emotional cues. This emotion-based music player project is a novel approach that plays songs automatically according to the user's emotions: it recognizes the user's facial expression and selects songs that match it. Music has a major impact on everyday human life and on innovative, progressive technologies. Ordinarily, the user faces the task of searching for songs manually and navigating through a playlist to choose one. This work proposes an effective and precise model that generates a playlist based on the user's current emotional state and behavior. Existing strategies for automating playlist creation are computationally slow, less reliable, and sometimes require additional hardware. Speech is the oldest and most natural way of expressing thoughts, feelings, and mood, but using it demands considerable technical effort, time, and cost. The proposed framework extracts facial expressions in real time, along with audio features from songs, and maps them to a specific emotion that automatically generates a playlist, so the processing cost remains relatively low. Emotions are recognized using a Support Vector Machine (SVM). A webcam captures the user's image, the user's facial features are extracted from the captured image, and music is played from pre-defined files depending on the detected emotion.
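The abstract describes a pipeline of webcam capture, facial feature extraction, and SVM-based emotion classification driving song selection. The sketch below illustrates that pipeline under assumptions not stated in the paper: OpenCV for webcam capture and Haar-cascade face detection, raw pixel intensities as features, scikit-learn's SVC as the classifier trained on placeholder data, and a hypothetical emotion-to-file mapping.

```python
# Minimal sketch of an emotion-driven music selector, assuming OpenCV and scikit-learn.
# The training data, feature choice, and song file names are placeholders, not the
# authors' implementation.

import cv2
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["happy", "sad", "angry", "neutral"]       # assumed label set
SONGS = {e: f"songs/{e}.mp3" for e in EMOTIONS}       # hypothetical pre-defined files

def extract_features(gray_face, size=(48, 48)):
    """Resize the detected face region and flatten pixel intensities into a feature vector."""
    face = cv2.resize(gray_face, size)
    return face.flatten().astype(np.float32) / 255.0

# Train the SVM on labelled face data (X: feature vectors, y: emotion indices).
# Random placeholder data is used here; a real system would load a facial-expression corpus.
rng = np.random.default_rng(0)
X_train = rng.random((200, 48 * 48), dtype=np.float32)
y_train = rng.integers(0, len(EMOTIONS), 200)
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Capture one frame from the webcam, detect the largest face, and classify its expression.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
        features = extract_features(gray[y:y + h, x:x + w])
        emotion = EMOTIONS[int(clf.predict([features])[0])]
        print(f"Detected emotion: {emotion} -> playing {SONGS[emotion]}")
```

In practice the placeholder training data would be replaced with a labelled facial-expression dataset and possibly richer features (e.g., landmarks or HOG descriptors), but the control flow the abstract describes, capture, detect, classify, then select a song, stays the same.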