Innovative Saliency based Deep Driving Scene Understanding System for Automatic Safety Assessment in Next-Generation Cars

F. Rundo, S. Conoci, S. Battiato, F. Trenta, C. Spampinato
{"title":"Innovative Saliency based Deep Driving Scene Understanding System for Automatic Safety Assessment in Next-Generation Cars","authors":"F. Rundo, S. Conoci, S. Battiato, F. Trenta, C. Spampinato","doi":"10.23919/AEITAUTOMOTIVE50086.2020.9307425","DOIUrl":null,"url":null,"abstract":"Visual saliency is the human attention mechanism that encodes such visio-sensing information to extract features from the observation scene. In the last few years, visual saliency estimation has received significant research interests in the automotive field. While driving the vehicle, the car driver focuses on specific objects rather than others by deterministic brain-driven saliency mechanisms inherent perceptual activity. In this study, we propose an intelligent system that combines a driver’s drowsiness detector with a saliency-based scene understanding pipeline. Specifically, we implemented ad-hoc 3D pre-trained Semantic Segmentation Deep Network to process the frames captured by automotive-grade camera device placed outside the car. We used an embedded platform based on the STA1295 core (ARM A7 Dual-Cores) with a hardware accelerator for hosting the proposed pipeline. Besides, we monitor the car driver’s drowsiness by using an innovative bio-sensor installed on the steering wheel, to collect the PhotoPlethysmoGraphy (PPG) signal. Ad-hoc 1D Temporal Deep Convolutional Network has been designed to classify the collected PPG time-series in order to assess the driver’s attention level. Finally, we compare the detected car driver’s attention level with corresponding saliency-based scene classification in order to assess the overall safety level. Experimental results confirm the effectiveness of the proposed pipeline.","PeriodicalId":104806,"journal":{"name":"2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/AEITAUTOMOTIVE50086.2020.9307425","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Visual saliency is the human attention mechanism that encodes visio-sensing information in order to extract features from the observed scene. In the last few years, visual saliency estimation has received significant research interest in the automotive field. While driving, the car driver focuses on specific objects rather than others through deterministic, brain-driven saliency mechanisms inherent in perceptual activity. In this study, we propose an intelligent system that combines a driver drowsiness detector with a saliency-based scene understanding pipeline. Specifically, we implemented an ad-hoc pre-trained 3D Semantic Segmentation Deep Network to process the frames captured by an automotive-grade camera placed outside the car. We used an embedded platform based on the STA1295 core (dual-core ARM A7) with a hardware accelerator to host the proposed pipeline. In addition, we monitor the car driver's drowsiness with an innovative bio-sensor installed on the steering wheel that collects the PhotoPlethysmoGraphy (PPG) signal. An ad-hoc 1D Temporal Deep Convolutional Network has been designed to classify the collected PPG time series in order to assess the driver's attention level. Finally, we compare the detected attention level with the corresponding saliency-based scene classification in order to assess the overall safety level. Experimental results confirm the effectiveness of the proposed pipeline.
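
The abstract does not specify an implementation of the 1D Temporal Deep Convolutional Network used to classify PPG time series, so the following is only a minimal sketch, not the authors' code. It assumes PyTorch, a fixed-length PPG window of 512 samples, three hypothetical attention classes, and illustrative layer widths; all of these choices are assumptions made for illustration.

```python
# Minimal, assumption-based sketch of a 1D temporal CNN that classifies
# raw PPG windows into attention levels (hypothetical: drowsy / neutral / attentive).
import torch
import torch.nn as nn


class PPG1DConvNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # Stacked 1D convolutions extract temporal features from the raw PPG signal;
        # global average pooling makes the network agnostic to the exact window length.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window_len) raw PPG samples
        h = self.features(x).squeeze(-1)   # (batch, 64)
        return self.classifier(h)          # (batch, n_classes) attention-level logits


# Example: classify one 512-sample PPG window (random data, for illustration only).
model = PPG1DConvNet()
logits = model(torch.randn(1, 1, 512))
attention_level = logits.argmax(dim=1)     # e.g. 0 = drowsy, 1 = neutral, 2 = attentive
```

In the proposed pipeline, the predicted attention level would then be compared with the saliency-based scene classification to derive the overall safety level.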