Video based Facial Emotion Recognition System using Deep Learning

Dharanaesh M, V. Pushpalatha, Yughendaran P, Janarthanan S, Dinesh A
{"title":"Video based Facial Emotion Recognition System using Deep Learning","authors":"Dharanaesh M, V. Pushpalatha, Yughendaran P, Janarthanan S, Dinesh A","doi":"10.1109/ICEARS56392.2023.10085245","DOIUrl":null,"url":null,"abstract":"Fatigue or drowsiness is a significant factor that contributes to the occurrence of terrible road accidents. Every day, the number of fatal injuries increases day by day. The paper introduces a novel experimental model that aims to reduce the frequency of accidents by detecting driver drowsiness while also recommending songs based on facial emotions. The existing models use more hardware than necessary, leading to more cost and also do not provide as much accuracy. The proposed system aims to enhance the overall experience by reducing both the computational time required to obtain results and the overall cost of the system. For that, this study has developed a real-time information processing system that captures the video from the car dash camera. Then, an object detection algorithm will be employed to extract multiple facial parts from each frame using a pre-trained deep learning model from image processing libraries like OpenCV. Then, there is MobileNetV2, a lightweight convolutional neural network model that performs transfer learning by freezing feature extraction layers and creating custom dense layers for facial emotion classification, with output labels of happiness, sadness, fear, anger, surprise, disgust, sleepiness, and neutral. The driver's face will be identified using directional analysis from multiple facial parts in a single frame to carry out drowsiness detection to avoid accidents. Then, according to the emotion predicted by multiple users, the application will fetch a playlist of songs from Spotify through a Spotify wrapper and recommend the songs by displaying them on the car's dash screen. Finally, the model will be optimized using various optimization techniques to run on low-latency embedded devices.","PeriodicalId":338611,"journal":{"name":"2023 Second International Conference on Electronics and Renewable Systems (ICEARS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Second International Conference on Electronics and Renewable Systems (ICEARS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEARS56392.2023.10085245","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Fatigue and drowsiness are significant contributors to serious road accidents, and the number of fatal injuries continues to rise. This paper introduces a novel experimental model that aims to reduce the frequency of accidents by detecting driver drowsiness while also recommending songs based on facial emotions. Existing models use more hardware than necessary, increasing cost without delivering comparable accuracy. The proposed system improves the overall experience by reducing both the computational time required to obtain results and the overall cost of the system. To this end, the study develops a real-time processing pipeline that captures video from the car's dash camera. An object detection algorithm then extracts multiple facial parts from each frame using a pre-trained deep learning model from image processing libraries such as OpenCV. MobileNetV2, a lightweight convolutional neural network, performs transfer learning by freezing its feature extraction layers and adding custom dense layers for facial emotion classification, with output labels of happiness, sadness, fear, anger, surprise, disgust, sleepiness, and neutral. The driver's face is identified through directional analysis of multiple facial parts within a single frame, enabling drowsiness detection to prevent accidents. Based on the emotions predicted for multiple users, the application fetches a playlist from Spotify through a Spotify wrapper and recommends the songs by displaying them on the car's dash screen. Finally, the model is optimized with various optimization techniques to run on low-latency embedded devices.
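A minimal sketch of the per-frame face extraction step described above, assuming OpenCV. The paper refers to a pre-trained detector from image processing libraries but does not name the exact model, so the Haar cascade, camera index, and crop size below are stand-ins rather than the authors' configuration.

```python
# Per-frame face extraction from a dash-cam feed (illustrative sketch).
# The Haar cascade is a placeholder for whichever pre-trained detector
# the authors used; camera index and crop size are assumptions.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_faces(frame, size=(224, 224)):
    """Return cropped, resized face regions found in a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(frame[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]

cap = cv2.VideoCapture(0)  # dash-cam stream; device index is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    faces = extract_faces(frame)
    # ...each face crop would then be passed to the emotion classifier...
cap.release()
```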
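The classification stage (MobileNetV2 with frozen feature-extraction layers and a custom dense head over eight emotion labels) could look roughly like the Keras sketch below. The dense-layer width, dropout rate, and optimizer settings are illustrative assumptions; the paper does not report these values.

```python
# Transfer-learning setup outlined in the abstract: MobileNetV2 as a frozen
# feature extractor with a custom dense head for eight emotion classes.
# Head sizes and training settings are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTION_LABELS = [
    "happiness", "sadness", "fear", "anger",
    "surprise", "disgust", "sleepiness", "neutral",
]

def build_emotion_classifier(input_shape=(224, 224, 3)):
    # Pre-trained MobileNetV2 backbone, ImageNet weights, classifier head removed.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet"
    )
    base.trainable = False  # freeze the feature-extraction layers

    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1] inputs
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),      # custom dense layer (assumed width)
        layers.Dropout(0.3),
        layers.Dense(len(EMOTION_LABELS), activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    build_emotion_classifier().summary()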
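The abstract mentions fetching a playlist from Spotify through a "Spotify wrapper" without naming the library. The sketch below assumes the spotipy client; the search query, credentials, and track count are placeholders.

```python
# Emotion-keyed playlist lookup via the Spotify Web API (assumed spotipy client).
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

def fetch_playlist_for_emotion(emotion, client_id, client_secret, n_tracks=10):
    sp = spotipy.Spotify(
        auth_manager=SpotifyClientCredentials(
            client_id=client_id, client_secret=client_secret
        )
    )
    # Search for a public playlist matching the predicted emotion keyword.
    result = sp.search(q=f"{emotion} mood", type="playlist", limit=1)
    items = result["playlists"]["items"]
    if not items:
        return []
    tracks = sp.playlist_items(items[0]["id"], limit=n_tracks)
    return [item["track"]["name"] for item in tracks["items"] if item.get("track")]
```

The returned track names would then be rendered on the car's dash screen as the recommendation list.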
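For the final optimization step targeting low-latency embedded devices, the abstract says only "various optimization techniques". The sketch below assumes post-training quantization with TensorFlow Lite as one plausible instance of such a technique.

```python
# Export the trained Keras model for an embedded target (assumed approach:
# TensorFlow Lite with post-training dynamic-range quantization).
import tensorflow as tf

def export_tflite(keras_model, path="emotion_model.tflite"):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
    tflite_model = converter.convert()
    with open(path, "wb") as f:
        f.write(tflite_model)
    return path
```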