{"title":"Multimodel emotion analysis in response to multimedia","authors":"Wei-Long Zheng, Jia-Yi Zhu, Bao-Liang Lu","doi":"10.1109/ICMEW.2014.6890622","DOIUrl":null,"url":null,"abstract":"In this demo paper, we designed a novel framework combining EEG and eye tracking signals to analyze users' emotional activities in response to multimedia. To realize the proposed framework, we extracted efficient features of EEG and eye tracking signals and used support vector machine as classifier. We combined multimodel features using feature-level fusion and decision-level fusion to classify three emotional categories (positive, neutral and negative), which can achieve the average accuracies of 75.62% and 74.92%, respectively. We investigated the brain activities that are associated with emotions. Our experimental results indicated there exist stable common patterns and activated areas of the brain associated with positive and negative emotions. In the demo, we also showed the trajectory of emotion changes in response to multimedia.","PeriodicalId":178700,"journal":{"name":"2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMEW.2014.6890622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this demo paper, we designed a novel framework that combines EEG and eye-tracking signals to analyze users' emotional activities in response to multimedia. To realize the proposed framework, we extracted efficient features from the EEG and eye-tracking signals and used a support vector machine as the classifier. We combined the multimodal features using feature-level fusion and decision-level fusion to classify three emotional categories (positive, neutral, and negative), achieving average accuracies of 75.62% and 74.92%, respectively. We also investigated the brain activities associated with emotions. Our experimental results indicated that there exist stable common patterns and activated areas of the brain associated with positive and negative emotions. In the demo, we also showed the trajectory of emotion changes in response to multimedia.
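The two fusion strategies compared in the abstract can be sketched as follows. This is a minimal illustration with scikit-learn SVMs on synthetic stand-ins for EEG and eye-tracking features; the feature dimensions, data, and hyperparameters are assumptions for demonstration, not the paper's actual pipeline or features.

```python
# Illustrative sketch of feature-level vs. decision-level fusion with SVMs.
# The "EEG" and "eye" features here are synthetic placeholders, not real data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 90
labels = np.repeat([0, 1, 2], n // 3)            # negative / neutral / positive
eeg = rng.normal(labels[:, None], 1.0, (n, 8))   # stand-in EEG features
eye = rng.normal(labels[:, None], 1.5, (n, 4))   # stand-in eye-tracking features

# Feature-level fusion: concatenate both modalities, train one classifier.
fused = np.hstack([eeg, eye])
clf_feat = SVC(kernel="linear").fit(fused, labels)
pred_feat = clf_feat.predict(fused)

# Decision-level fusion: train one classifier per modality, then combine
# their class-probability outputs (here by simple averaging) and take argmax.
clf_eeg = SVC(kernel="linear", probability=True, random_state=0).fit(eeg, labels)
clf_eye = SVC(kernel="linear", probability=True, random_state=0).fit(eye, labels)
avg_prob = (clf_eeg.predict_proba(eeg) + clf_eye.predict_proba(eye)) / 2
pred_dec = avg_prob.argmax(axis=1)
```

Feature-level fusion lets the classifier learn cross-modal interactions directly, while decision-level fusion keeps each modality's model independent and only merges their outputs, which is often more robust when one signal is noisy or missing.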