Enhancing Safety in Vehicles using Emotion Recognition with Artificial Intelligence
Moyank Giri, Muskan Bansal, Aditya Ramesh, D. Satvik, U. D
2023 IEEE 8th International Conference for Convergence in Technology (I2CT), published 2023-04-07
DOI: 10.1109/I2CT57861.2023.10126274
Citations: 0
Abstract
Safety is the single most important aspect to improve in automobiles. The most effective way to improve road safety is to address the driver factor, since over 90% of road accidents have been attributed to driver error. The driver's emotional state plays a crucial role in safe driving: external safety measures are of limited help unless the driver is emotionally stable. Thus, detecting the driver's emotion and improving it could significantly enhance road safety. Hence, this paper focuses on identifying and improving the emotional stability of drivers in order to raise automobile safety. Artificial Intelligence (AI) technologies have automated and improved many aspects of driving and created a comfortable passenger experience. This paper applies such AI technologies to detect the driver's emotional state as Happy, Sad, Angry, Surprised, Fear, Disgust, or Neutral from both speech and facial expressions; to generate audio and visual safety alerts based on the detected emotion; and finally to improve the driver's emotional state through suggestions from a music recommendation system. The study uses deep learning models (CNNs) for automatic emotion detection from audio and video, achieving validation accuracies of 83% for video emotion detection and 78% for audio emotion detection. The paper also details an algorithm developed for combining the emotions detected from audio and video. Furthermore, the study uses the Spotify API for the music recommendation system.
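The abstract mentions an algorithm for combining the emotion predictions from the audio and video models but does not specify it. A minimal sketch of one plausible approach is confidence-weighted late fusion, where each modality's per-emotion probabilities are weighted by that model's reported validation accuracy; the function name, weights, and fusion rule below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Emotion classes listed in the abstract.
EMOTIONS = ["Happy", "Sad", "Angry", "Surprised", "Fear", "Disgust", "Neutral"]

def fuse_emotions(video_probs, audio_probs, video_weight=0.83, audio_weight=0.78):
    """Combine per-emotion probabilities from the video and audio CNNs.

    Hypothetical fusion rule: a weighted average with each modality
    weighted by its reported validation accuracy (83% video, 78% audio).
    The paper's actual combination algorithm is not given in the abstract.
    """
    video_probs = np.asarray(video_probs, dtype=float)
    audio_probs = np.asarray(audio_probs, dtype=float)
    combined = video_weight * video_probs + audio_weight * audio_probs
    combined /= combined.sum()  # renormalize to a probability distribution
    return EMOTIONS[int(np.argmax(combined))], combined

# Example: the video model strongly suggests Angry, the audio model leans Neutral.
video = [0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.15]
audio = [0.10, 0.10, 0.25, 0.05, 0.05, 0.05, 0.40]
label, probs = fuse_emotions(video, audio)
```

Because the video model carries slightly more weight and is more confident here, the fused label is Angry; an alert or calming-music suggestion would then be keyed off that label.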