Dharanaesh M, V. Pushpalatha, Yughendaran P, Janarthanan S, Dinesh A
{"title":"Video based Facial Emotion Recognition System using Deep Learning","authors":"Dharanaesh M, V. Pushpalatha, Yughendaran P, Janarthanan S, Dinesh A","doi":"10.1109/ICEARS56392.2023.10085245","DOIUrl":null,"url":null,"abstract":"Fatigue or drowsiness is a significant factor that contributes to the occurrence of terrible road accidents. Every day, the number of fatal injuries increases day by day. The paper introduces a novel experimental model that aims to reduce the frequency of accidents by detecting driver drowsiness while also recommending songs based on facial emotions. The existing models use more hardware than necessary, leading to more cost and also do not provide as much accuracy. The proposed system aims to enhance the overall experience by reducing both the computational time required to obtain results and the overall cost of the system. For that, this study has developed a real-time information processing system that captures the video from the car dash camera. Then, an object detection algorithm will be employed to extract multiple facial parts from each frame using a pre-trained deep learning model from image processing libraries like OpenCV. Then, there is MobileNetV2, a lightweight convolutional neural network model that performs transfer learning by freezing feature extraction layers and creating custom dense layers for facial emotion classification, with output labels of happiness, sadness, fear, anger, surprise, disgust, sleepiness, and neutral. The driver's face will be identified using directional analysis from multiple facial parts in a single frame to carry out drowsiness detection to avoid accidents. Then, according to the emotion predicted by multiple users, the application will fetch a playlist of songs from Spotify through a Spotify wrapper and recommend the songs by displaying them on the car's dash screen. 
Finally, the model will be optimized using various optimization techniques to run on low-latency embedded devices.","PeriodicalId":338611,"journal":{"name":"2023 Second International Conference on Electronics and Renewable Systems (ICEARS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Second International Conference on Electronics and Renewable Systems (ICEARS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEARS56392.2023.10085245","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Fatigue and drowsiness are significant contributors to severe road accidents, and the number of fatal injuries continues to rise. This paper introduces a novel experimental model that aims to reduce the frequency of accidents by detecting driver drowsiness while also recommending songs based on facial emotions. Existing models use more hardware than necessary, raising costs without delivering comparable accuracy. The proposed system improves the overall experience by reducing both the computational time required to obtain results and the overall cost of the system. To that end, this study develops a real-time information processing system that captures video from the car's dash camera. An object detection algorithm then extracts multiple facial parts from each frame using a pre-trained deep learning model from image processing libraries such as OpenCV. Next, MobileNetV2, a lightweight convolutional neural network, performs transfer learning by freezing its feature extraction layers and adding custom dense layers for facial emotion classification, with output labels of happiness, sadness, fear, anger, surprise, disgust, sleepiness, and neutral. The driver's face is identified using directional analysis of multiple facial parts in a single frame to carry out drowsiness detection and avoid accidents. Based on the predicted emotion, the application fetches a playlist of songs from Spotify through a Spotify wrapper and recommends the songs by displaying them on the car's dash screen. Finally, the model is optimized using various optimization techniques to run on low-latency embedded devices.
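The per-frame facial-part extraction described above can be sketched with OpenCV's bundled pre-trained Haar cascades. This is a minimal illustration, assuming Haar cascades as the "pre-trained deep learning model from image processing libraries"; the paper does not name the exact detector, and the `extract_facial_parts` helper is hypothetical.

```python
# Sketch: extract face and eye regions from a video frame using OpenCV's
# bundled pre-trained Haar cascades (an assumption; the paper's exact
# detector is not specified in the abstract).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_facial_parts(frame):
    """Return (face_box, eye_boxes) for the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the most prominent (largest-area) face, e.g. the driver's.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    return (x, y, w, h), eyes

# A synthetic black frame contains no face, so detection returns None.
result = extract_facial_parts(np.zeros((480, 640, 3), dtype=np.uint8))
print(result)  # None
```

In a deployed system, frames would come from `cv2.VideoCapture` on the dash-camera stream rather than a synthetic array.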
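The transfer-learning step the abstract describes (a frozen MobileNetV2 feature extractor topped with custom dense layers for eight emotion classes) can be sketched in Keras. Layer sizes, dropout rate, and optimizer here are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch: MobileNetV2 transfer learning with frozen feature-extraction
# layers and a custom dense head for 8 emotion classes, per the abstract.
# Head architecture and hyperparameters are assumptions.
import numpy as np
import tensorflow as tf

# happiness, sadness, fear, anger, surprise, disgust, sleepiness, neutral
NUM_CLASSES = 8

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights=None)  # in practice, weights="imagenet" for transfer learning
base.trainable = False  # freeze the feature-extraction layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),  # custom dense layers
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on a dummy frame: output is one probability per emotion.
probs = model.predict(np.zeros((1, 224, 224, 3), dtype=np.float32), verbose=0)
print(probs.shape)
```

Freezing the base keeps training cheap, which aligns with the paper's goal of low computational cost on embedded hardware.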
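The final recommendation step maps a predicted emotion label to a Spotify playlist search. The mapping below is a hypothetical illustration (the paper does not publish its query strings), and spotipy is shown as one common "Spotify wrapper" without asserting it is the one used.

```python
# Sketch: map a predicted emotion label to a playlist search query.
# The query strings are assumptions for illustration only.
EMOTION_QUERIES = {
    "happiness": "happy hits",
    "sadness": "sad songs",
    "fear": "calming music",
    "anger": "chill out",
    "surprise": "feel good",
    "disgust": "mood boost",
    "sleepiness": "wake up energy",
    "neutral": "top hits",
}

def playlist_query(emotion: str) -> str:
    """Fall back to a generic query for unrecognized labels."""
    return EMOTION_QUERIES.get(emotion, "top hits")

print(playlist_query("happiness"))  # happy hits

# With API credentials configured, a spotipy search might look like
# the following (not executed here):
#   import spotipy
#   from spotipy.oauth2 import SpotifyClientCredentials
#   sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
#   results = sp.search(q=playlist_query("happiness"), type="playlist", limit=5)
```

The returned playlist items would then be rendered on the car's dash screen, as the abstract describes.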