ViE-Take: A Vision-Driven Multi-Modal Dataset for Exploring the Emotional Landscape in Takeover Safety of Autonomous Driving

Yantong Wang, Yu Gu, Tong Quan, Jiaoyun Yang, Mianxiong Dong, Ning An, Fuji Ren

Research, vol. 8, article 0603, published 2025-03-14. DOI: 10.34133/research.0603
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11908832/pdf/
Abstract
Takeover safety is drawing increasing attention in intelligent transportation as new energy vehicles with advanced autopilot capabilities proliferate on the road. Although recent studies highlight the importance of drivers' emotions for takeover safety, the lack of emotion-aware takeover datasets hinders further investigation and constrains potential applications in this field. To this end, we introduce ViE-Take, the first Vision-driven dataset for exploring the Emotional landscape in Takeovers of autonomous driving (vision is used because it is the most cost-effective and user-friendly sensing modality for commercial driver monitoring systems). ViE-Take enables a comprehensive exploration of the impact of emotions on drivers' takeover performance through 3 key attributes: multi-source emotion elicitation, multi-modal driver data collection, and multi-dimensional emotion annotations. To facilitate the use of ViE-Take, we provide 4 deep models (corresponding to 4 prevalent learning strategies) for predicting 3 aspects of drivers' takeover performance: readiness, reaction time, and quality. These models can benefit various downstream tasks, such as driver emotion recognition and regulation for automobile manufacturers. Initial analysis and experiments on ViE-Take indicate that (a) emotions have diverse, sometimes counterintuitive, impacts on takeover performance; (b) highly expressive social media clips, despite their brevity, are effective at eliciting emotions (a foundation for emotion regulation); and (c) predicting takeover performance solely through deep learning on vision data is not only feasible but also holds great potential.
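To make the vision-only prediction task concrete, below is a minimal, hypothetical PyTorch sketch of a clip-level model with three output heads for the three aspects named in the abstract (readiness, reaction time, and quality). The class name TakeoverPredictor, the architecture (a small per-frame CNN encoder plus a GRU for temporal aggregation), and all hyperparameters are illustrative assumptions; the paper's four actual models are not described in this abstract.

```python
# Hypothetical sketch of vision-only takeover-performance prediction.
# Not the authors' method -- an illustrative baseline under assumed shapes.
import torch
import torch.nn as nn

class TakeoverPredictor(nn.Module):
    """Predicts three takeover aspects from a driver-facing video clip."""
    def __init__(self, frame_features: int = 512, hidden: int = 128):
        super().__init__()
        # Per-frame visual encoder (a stand-in for any CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, frame_features),
        )
        # Temporal aggregation over the frames of the clip.
        self.gru = nn.GRU(frame_features, hidden, batch_first=True)
        # Three heads: readiness (probability), reaction time and
        # takeover quality (scalar regressions).
        self.readiness = nn.Linear(hidden, 1)
        self.reaction_time = nn.Linear(hidden, 1)
        self.quality = nn.Linear(hidden, 1)

    def forward(self, clip: torch.Tensor):
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last = self.gru(feats)       # final hidden state: (1, b, hidden)
        z = last.squeeze(0)
        return (torch.sigmoid(self.readiness(z)),
                self.reaction_time(z),
                self.quality(z))

# Smoke test on a random 16-frame clip.
model = TakeoverPredictor()
ready, rt, quality = model(torch.randn(2, 16, 3, 64, 64))
print(ready.shape, rt.shape, quality.shape)  # torch.Size([2, 1]) each
```

The encode-per-frame-then-aggregate pattern is a common design for clip-level prediction; the multi-head output mirrors the abstract's framing of takeover performance as three related targets learned from the same visual evidence.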
Journal Introduction
Research serves as a global platform for academic exchange, collaboration, and technological advancement. The journal welcomes high-quality research contributions from any domain and from authors around the globe.
Spanning fundamental research in the life and physical sciences, Research also highlights significant findings and issues in engineering and applied science. The journal features original research articles, reviews, perspectives, and editorials, fostering a diverse and dynamic scholarly environment.