Face2Mus: A facial emotion based Internet radio tuner application
Yara Rizk, Maya H. Safieddine, David Matchoulian, M. Awad
MELECON 2014 - 2014 17th IEEE Mediterranean Electrotechnical Conference
Published: 2014-04-13 · DOI: 10.1109/MELCON.2014.6820542
Citations: 5
Abstract
In this paper we propose Face2Mus, a mobile application that streams music from online radio stations after identifying the user's emotions, without interfering with the device's usage. Face2Mus streams songs from online radio stations and classifies them into emotion classes based on audio features using an energy-aware support vector machine (SVM) classifier. In parallel, the application captures images of the user's face using the smartphone or tablet's camera and classifies them into one of three emotions, using a multiclass SVM trained on facial geometric distances and wrinkles. The audio classification based on a regular SVM achieved an overall testing accuracy of 99.83% when trained on the Million Song Dataset subset, whereas the energy-aware SVM exhibited an average degradation of 1.93% when a 59% reduction in the number of support vectors (SV) was enforced. The image classification achieved an overall testing accuracy of 87.5% using leave-one-out validation on an in-house image database. The overall application requires 272 KB of storage space, 12 to 24 MB of RAM, and a startup time of approximately 2 minutes. Aside from its entertainment potential, Face2Mus has possible uses in music therapy for improving people's well-being and emotional state.
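As a rough illustration of the classification pipeline the abstract describes, a minimal multiclass SVM with leave-one-out validation can be sketched in scikit-learn. This is not the authors' code: the feature vectors below are synthetic stand-ins for the paper's facial geometric distances and wrinkle features, the three class labels merely mirror its three emotion classes, and the energy-aware support-vector reduction is not implemented here.

```python
# Illustrative sketch only (assumptions: synthetic features, three arbitrary
# emotion classes); not the Face2Mus implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 30 samples x 8 "geometric distance" features,
# with a per-class mean offset so the three classes are separable.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(10, 8)) for i in range(3)])
y = np.repeat(np.arange(3), 10)  # 0, 1, 2 stand in for three emotions

# RBF-kernel SVM; scikit-learn handles the multiclass case via one-vs-one.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")

# Leave-one-out validation, as reported for the paper's image classifier.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")

clf.fit(X, y)
# The paper's energy-aware variant trades some accuracy for fewer support
# vectors; here we only report the regular model's per-class SV counts.
print("support vectors per class:", clf.n_support_)
```

On real data, the support-vector count drives the model's memory and energy footprint, which is what the paper's 59% SV reduction targets.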