Understanding head and hand activities and coordination in naturalistic driving videos

Sujitha Martin, Eshed Ohn-Bar, Ashish Tawari, M. Trivedi
{"title":"Understanding head and hand activities and coordination in naturalistic driving videos","authors":"Sujitha Martin, Eshed Ohn-Bar, Ashish Tawari, M. Trivedi","doi":"10.1109/IVS.2014.6856610","DOIUrl":null,"url":null,"abstract":"In this work, we propose a vision-based analysis framework for recognizing in-vehicle activities such as interactions with the steering wheel, the instrument cluster and the gear. The framework leverages two views for activity analysis, a camera looking at the driver's hand and another looking at the driver's head. The techniques proposed can be used by researchers in order to extract `mid-level' information from video, which is information that represents some semantic understanding of the scene but may still require an expert in order to distinguish difficult cases or leverage the cues to perform drive analysis. Unlike such information, `low-level' video is large in quantity and can't be used unless processed entirely by an expert. This work can apply to minimizing manual labor so that researchers may better benefit from the accessibility of the data and provide them with the ability to perform larger-scaled studies.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Intelligent Vehicles Symposium Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2014.6856610","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 40

Abstract

In this work, we propose a vision-based analysis framework for recognizing in-vehicle activities such as interactions with the steering wheel, the instrument cluster, and the gear shift. The framework leverages two views for activity analysis: a camera looking at the driver's hand and another looking at the driver's head. The proposed techniques allow researchers to extract 'mid-level' information from video, that is, information that represents some semantic understanding of the scene but may still require an expert to distinguish difficult cases or to leverage the cues for drive analysis. In contrast, 'low-level' video is large in quantity and cannot be used unless processed entirely by an expert. This work can minimize manual labor, so that researchers may better benefit from the accessibility of the data and gain the ability to perform larger-scale studies.
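To make the notion of combining head and hand cues concrete, the sketch below shows one hypothetical way per-frame cues from the two views could be fused into a coarse activity label. This is not the authors' implementation; the region names, yaw threshold, labels, and smoothing window are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's implementation): fusing per-frame cues
# from a hand-view camera and a head-view camera into a coarse activity label.
# Region names, the yaw threshold, and labels are assumptions for this example.

from dataclasses import dataclass
from typing import List


@dataclass
class FrameCues:
    """Mid-level cues extracted from one synchronized frame pair."""
    hand_region: str     # e.g. "wheel", "instrument_cluster", "gear", "unknown"
    head_yaw_deg: float  # signed head yaw from the head-view camera; 0 = facing forward


def classify_activity(cues: FrameCues) -> str:
    """Map a single frame's head and hand cues to a coarse activity label."""
    looking_away = abs(cues.head_yaw_deg) > 20.0  # assumed threshold
    if cues.hand_region == "instrument_cluster" and looking_away:
        return "instrument_cluster_interaction"
    if cues.hand_region == "gear":
        return "gear_interaction"
    if cues.hand_region == "wheel":
        return "wheel_interaction"
    return "other"


def smooth_labels(per_frame: List[str], window: int = 5) -> List[str]:
    """Majority-vote smoothing over a sliding window to suppress single-frame noise."""
    smoothed = []
    for i in range(len(per_frame)):
        lo = max(0, i - window // 2)
        hi = min(len(per_frame), i + window // 2 + 1)
        votes = per_frame[lo:hi]
        smoothed.append(max(set(votes), key=votes.count))
    return smoothed


if __name__ == "__main__":
    # Toy sequence: hand moves from the wheel to the instrument cluster while
    # the head turns, then returns to the wheel.
    sequence = [
        FrameCues("wheel", 2.0),
        FrameCues("wheel", 3.5),
        FrameCues("instrument_cluster", 28.0),
        FrameCues("instrument_cluster", 31.0),
        FrameCues("wheel", 5.0),
    ]
    labels = smooth_labels([classify_activity(c) for c in sequence])
    print(labels)
```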