Analysis of Facial Expressiveness Captured in Reaction to Videos

K. Srinivasa, Sriram Anupindi, R. Sharath, S. Chaitanya
2017 IEEE 7th International Advance Computing Conference (IACC)
DOI: 10.1109/IACC.2017.0140
With advances in technology, the volume of acquired data has grown rapidly. Among the many types of data gathered, interpreting multimedia data by machine without human intervention remains a challenge, and extracting meaningful content from videos can enable better solutions in a variety of domains. Building on video processing as the underlying concept, this paper aims to detect the expressiveness of an individual. Many traditional approaches exist for this task; this work instead uses deep learning, with LSTM (Long Short-Term Memory) as the chosen implementation construct. The MIT Affectiva dataset, which comprises videos of individuals responding to Super Bowl commercials, is used to train the network. Extending this network, two front ends for testing video samples are provided: a web page for uploading videos and displaying the results, and an IoT device that records the video of an individual's response and sends it to the processing server. The responses of individuals to a particular commercial are recorded, and the system is tested on them. The results obtained are examined, and the scope for various interpretations is shown. The expressiveness detected becomes pivotal feedback for the makers of commercials, paving the way for improving them.
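To make the described pipeline concrete, here is a minimal sketch, not the authors' code, of the kind of model the abstract names: an LSTM that consumes a sequence of per-frame facial feature vectors from a reaction video and emits a single expressiveness score. The feature dimension, hidden size, and score range are assumptions; only the choice of an LSTM over video-frame sequences comes from the paper.

```python
# Sketch of an LSTM-based expressiveness scorer (assumptions noted above).
import torch
import torch.nn as nn

class ExpressivenessLSTM(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # batch_first=True: input shape is (batch, frames, feature_dim)
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one expressiveness score per video

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # h_n is the final hidden state: shape (num_layers, batch, hidden_dim)
        _, (h_n, _) = self.lstm(frame_features)
        return torch.sigmoid(self.head(h_n[-1]))  # score in (0, 1)

# Usage: 8 videos, 300 frames each, 128-dim facial features per frame.
model = ExpressivenessLSTM()
scores = model(torch.randn(8, 300, 128))  # -> shape (8, 1)
```

Summarizing the sequence into the final hidden state is one common design choice for video-level prediction; per-frame outputs could equally be pooled instead.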
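The hand-off between the IoT recorder and the processing server could look like the sketch below. The endpoint URL, port, and field name are hypothetical; only the pattern (uploading a recorded clip to a server that runs the analysis) is taken from the abstract.

```python
# Sketch of the IoT device's upload step (endpoint and field names hypothetical).
import requests

def send_reaction_video(path: str,
                        server: str = "http://processing-server.local:8000"):
    with open(path, "rb") as clip:
        resp = requests.post(f"{server}/analyze",      # hypothetical endpoint
                             files={"video": clip},
                             timeout=60)
    resp.raise_for_status()
    return resp.json()  # e.g. the expressiveness score computed server-side
```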