Swarubini P J; Thomas M. Deserno; Nagarajan Ganapathy
IEEE Sensors Letters, vol. 9, no. 9, pp. 1-4, published 2025-08-05. DOI: 10.1109/LSENS.2025.3595917
https://ieeexplore.ieee.org/document/11113323/
An Intelligent Camera-Based Contactless Driver Stress State Monitoring Using Multimodality Fusion
Driver stress involves complex psychological, physiological, and behavioral responses to stressors across different mobility spaces and contributes to road accidents. Recently, biosignals derived from noncontact sensing have been explored for mental health assessment. However, acquiring camera-based biosignals in mobility environments remains challenging. In this study, we aim to classify driver stress using imaging photoplethysmography (iPPG) signals, facial keypoints, and a fusion-based convolutional neural network (CNN). For this, we acquired infrared facial videos from healthy subjects (N=20) during simulated driving. iPPG signals and facial keypoints were extracted using the local group invariance method and a CNN, respectively. The iPPG signals were processed with a 1-D CNN and the facial keypoints with a 2-D CNN for feature learning. The proposed approach is able to distinguish between drivers' stress states. Experimental results show that the proposed fusion approach achieved a mean classification accuracy (ACC) of 87.00% and a mean F1-score of 86.33%. Among the individual models, the iPPG signals achieved the best mean ACC (90.00%) and F1-score (90.33%). Thus, the framework could be extended to driver stress detection in real-time scenarios, enabling early stress detection.
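The dual-branch design described in the abstract (a 1-D CNN over iPPG signals, a 2-D CNN over facial keypoints, features fused before classification) can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' architecture: the kernels, single-channel branches, ReLU activation, global average pooling, and two-value feature fusion are all placeholder assumptions chosen only to show the late-fusion pattern.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Valid-mode 1-D cross-correlation, standing in for one iPPG conv layer."""
    k = len(kernel)
    n = len(signal) - k + 1
    return np.array([np.dot(signal[i:i + k], kernel) for i in range(n)])

def conv2d_valid(image, kernel):
    """Valid-mode 2-D cross-correlation, standing in for one keypoint-map conv layer."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def fused_features(ippg, keypoint_map, k1, k2):
    """Late fusion: ReLU + global average pooling per branch, then concatenation.

    `k1` and `k2` are hypothetical learned kernels; a real model would stack
    several conv layers per branch and feed the fused vector to a classifier head.
    """
    f1 = np.maximum(conv1d_valid(ippg, k1), 0).mean()          # 1-D branch (iPPG)
    f2 = np.maximum(conv2d_valid(keypoint_map, k2), 0).mean()  # 2-D branch (keypoints)
    return np.array([f1, f2])                                  # fused feature vector
```

In this sketch the fusion happens at the feature level (concatenation after pooling), which matches the abstract's description of feature learning per modality followed by a fusion-based classifier; the actual layer counts and fusion operator in the paper may differ.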