Action and Object Interaction Recognition for Driver Activity Classification

Patrick Weyers, David Schiebener, A. Kummert
{"title":"Action and Object Interaction Recognition for Driver Activity Classification","authors":"Patrick Weyers, David Schiebener, A. Kummert","doi":"10.1109/ITSC.2019.8917139","DOIUrl":null,"url":null,"abstract":"Knowing what the driver is doing inside a vehicle is essential information for all stages of vehicle automation. For example it can be used for adaptive warning strategies in combination with an advanced driver assistance systems system, for predicting the response time to take back the control of a partially automated vehicle, or ensuring the driver is ready to manually drive a highly automated vehicle in the future. We present a system for driver activity recognition based on image sequences of an in-cabin time-of-flight camera. Our dataset includes actions such as entering and leaving a car or driver object interactions such as using a phone or drinking. In the first stage, we localize body key points of the driver. In the second stage, we extract image regions around the localized hands. These regions and the determined 3D body key points are used as the input to a recurrent neural network for driver activity recognition. 
With a mean average precision of 0.85 we reach better classification rates than approaches relying only on body key points or images.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"56 1","pages":"4336-4341"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITSC.2019.8917139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 15

Abstract

Knowing what the driver is doing inside a vehicle is essential information for all stages of vehicle automation. For example, it can be used for adaptive warning strategies in combination with an advanced driver assistance system, for predicting the response time needed to take back control of a partially automated vehicle, or for ensuring that the driver is ready to manually drive a highly automated vehicle in the future. We present a system for driver activity recognition based on image sequences from an in-cabin time-of-flight camera. Our dataset includes actions such as entering and leaving a car, as well as driver-object interactions such as using a phone or drinking. In the first stage, we localize body key points of the driver. In the second stage, we extract image regions around the localized hands. These regions and the determined 3D body key points are used as input to a recurrent neural network for driver activity recognition. With a mean average precision of 0.85, we reach better classification rates than approaches relying only on body key points or images.
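The two-stage pipeline described above (per-frame pose and hand features fed to a recurrent classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions (13 joints, 64-d hand embeddings, 8 activity classes), the vanilla RNN cell, and the random stand-in weights are all assumptions made for the sketch; the paper does not specify these details in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 13 body joints in 3D,
# a 64-d embedding per hand crop, 8 activity classes, 30-frame sequences.
N_JOINTS, HAND_DIM, N_CLASSES, T = 13, 64, 8, 30
HIDDEN = 128
IN_DIM = N_JOINTS * 3 + 2 * HAND_DIM  # pose + left/right hand features

# Randomly initialized stand-ins for learned weights.
W_xh = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
b_h = np.zeros(HIDDEN)
W_hy = rng.normal(0, 0.1, (HIDDEN, N_CLASSES))
b_y = np.zeros(N_CLASSES)

def classify_sequence(poses, hand_feats):
    """Fuse per-frame 3D key points and hand-crop features with a
    vanilla RNN, then classify the sequence from the last hidden state."""
    h = np.zeros(HIDDEN)
    for t in range(poses.shape[0]):
        # Concatenate flattened pose and both hand-region embeddings.
        x = np.concatenate([poses[t].ravel(), hand_feats[t].ravel()])
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    logits = h @ W_hy + b_y
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax over activity classes

# Dummy sequence: T frames of 3D key points plus two hand-crop embeddings.
poses = rng.normal(size=(T, N_JOINTS, 3))
hand_feats = rng.normal(size=(T, 2, HAND_DIM))
probs = classify_sequence(poses, hand_feats)
print(probs.shape)  # (8,)
```

In practice the hand-crop embeddings would come from a convolutional feature extractor over the regions cropped around the localized hands, and the recurrent cell would typically be an LSTM or GRU trained end to end; the point of the sketch is only the fusion of key-point and image-region streams before the recurrent layer.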