{"title":"EmotionNet: ResNeXt Inspired CNN Architecture for Emotion Analysis on Raspberry Pi","authors":"Ved Gupta, Vinayak Gajendra Panchal, Vinamra Singh, Deepika Bansal, Peeyush Garg","doi":"10.1109/RTEICT52294.2021.9573569","DOIUrl":null,"url":null,"abstract":"Facial Expression Recognition has seen remarkable advancements in scientific research and development in recent years. Furthermore, these advancements have enabled to efficiently extract facial features for psychological analysis, improvement of consumer experience, and research on human-computer interaction. Facial expressions play a vital role in human existence because they are a significant part of non-verbal communication. However, existing FER systems require large computational capacity, rendering them unusable for use in either small-scale systems or large-scale deployments. Presented work aims to find a solution to aforesaid problem by developing an FER system capable of running on affordable and modular devices specifically Raspberry pi. In presented work, the FER system is developed that identifies and extracts the face using OpenCV, then classifies the expressions using a ResNeXt inspired CNN architecture named EmotionNet. The classifier network was trained on the FERPlus dataset, and it achieved 70.22% micro-accuracy on the test set. An interactive GUI platform was designed using KivyMD to control the overall system. The classification of facial expression takes place into the following five categories: neutral, happy, sad, angry, and surprised. The detected emotion is used to generate a descriptive report consisting of qualitative material, in order to apprise the user of their state of emotion, and alter it in case of negative emotions. The developed system running on Raspberry Pi provides a high throughput rate of 1.33 frames per second.","PeriodicalId":191410,"journal":{"name":"2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RTEICT52294.2021.9573569","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Facial Expression Recognition (FER) has seen remarkable advancements in scientific research and development in recent years. These advancements have enabled efficient extraction of facial features for psychological analysis, improvement of consumer experience, and research on human-computer interaction. Facial expressions play a vital role in human existence because they are a significant part of non-verbal communication. However, existing FER systems require large computational capacity, rendering them impractical for both small-scale systems and large-scale deployments. The presented work aims to address this problem by developing an FER system capable of running on affordable and modular devices, specifically the Raspberry Pi. In the presented work, an FER system is developed that detects and extracts the face using OpenCV, then classifies the expression using a ResNeXt-inspired CNN architecture named EmotionNet. The classifier network was trained on the FERPlus dataset and achieved 70.22% micro-accuracy on the test set. An interactive GUI platform was designed using KivyMD to control the overall system. Facial expressions are classified into five categories: neutral, happy, sad, angry, and surprised. The detected emotion is used to generate a descriptive report consisting of qualitative material, in order to apprise users of their emotional state and help them alter it in case of negative emotions. The developed system achieves a throughput of 1.33 frames per second on the Raspberry Pi.
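To make the detect-then-classify pipeline described above concrete, below is a minimal Python sketch: OpenCV locates the face, the crop is fed to a trained classifier, and one of the five emotion labels is returned. The Haar-cascade detector, the model file name ("emotionnet.h5"), and the 48x48 grayscale input size are assumptions for illustration, not details taken from the paper; the actual EmotionNet architecture is not reproduced here.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# The five categories reported in the abstract.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

# Face detector bundled with OpenCV (the paper only states that OpenCV is used).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Placeholder for the trained EmotionNet classifier (hypothetical file name).
model = load_model("emotionnet.h5")

def classify_frame(frame):
    """Detect the largest face in a BGR frame and return its predicted emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep only the largest detected face region.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # Pi camera module or USB webcam
    ok, frame = cap.read()
    if ok:
        print(classify_frame(frame))   # e.g. "happy"
    cap.release()
```

In the reported system this loop would feed the KivyMD GUI and the report generator; on the Raspberry Pi the end-to-end rate quoted by the authors is 1.33 frames per second.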