Deriving Pattern in Driver's Observability in Road Turns & Traffic Lights: Eye-Tracking based Analysis

H. Venkataraman, M. Madhuri, R. Assfalg
{"title":"Deriving Pattern in Driver's Observability in Road Turns & Traffic Lights: Eye-Tracking based Analysis","authors":"H. Venkataraman, M. Madhuri, R. Assfalg","doi":"10.1145/3267195.3267196","DOIUrl":null,"url":null,"abstract":"As one move towards driverless cars, there will always be a big worry of how autonomous cars would behave in the presence of vehicles driven by humans. In the co-existence model, it is essential for autonomous systems to 'understand' the behavior and gazing patterns of the drivers across different road turns and traffic lights. It is essential to understand that each road turn in a city is different due to angle of turn, building environment, etc. Hence, one needs to understand the gaze patterns of drivers across different turns and traffic lights. This paper is a long-drawn effort for measuring driver's observability and deriving driver's observance pattern in real-time. In this regard, an experimental model is provided and clustering-based technique is applied that would measure driver observability. More than 100 segmented readings were extracted from the video of five different vehicles and drivers under two different road conditions. It was observed that - while taking left or right turns and waiting in traffic light, the focus of drivers in front of their cars also changes considerably, with significant variation in the driver's gaze point, both horizontally and vertically. This is a important pattern/result in the design of adaptive driver assistance system and future driverless cars.","PeriodicalId":185142,"journal":{"name":"Proceedings of the 1st International Workshop on Communication and Computing in Connected Vehicles and Platooning","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Communication and Computing in Connected Vehicles and Platooning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3267195.3267196","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

As we move towards driverless cars, there will always be concern about how autonomous cars behave in the presence of vehicles driven by humans. In this co-existence model, it is essential for autonomous systems to 'understand' the behavior and gazing patterns of drivers across different road turns and traffic lights. Each road turn in a city is different due to the angle of the turn, the surrounding built environment, etc. Hence, one needs to understand drivers' gaze patterns across different turns and traffic lights. This paper is a long-drawn effort to measure a driver's observability and derive the driver's observance pattern in real time. To this end, an experimental model is provided and a clustering-based technique is applied to measure driver observability. More than 100 segmented readings were extracted from video of five different vehicles and drivers under two different road conditions. It was observed that, while taking left or right turns and waiting at traffic lights, the focus of drivers in front of their cars changes considerably, with significant variation in the driver's gaze point both horizontally and vertically. This is an important pattern/result for the design of adaptive driver assistance systems and future driverless cars.
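The abstract does not specify which clustering algorithm is used on the segmented gaze readings. As an illustration only, the minimal sketch below applies k-means to synthetic horizontal/vertical gaze-point samples and reports the spread of each cluster, which is one hedged way of quantifying how much a driver's gaze point varies during a maneuver; the data, cluster count, and variable names are all hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper applies an unspecified clustering-based
# technique to segmented gaze readings; k-means is assumed here purely for
# demonstration, and the gaze data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic gaze points (normalized horizontal/vertical screen coordinates)
# for two hypothetical maneuvers: driving straight vs. taking a turn.
straight = rng.normal(loc=[0.5, 0.5], scale=[0.05, 0.03], size=(50, 2))
turning = rng.normal(loc=[0.3, 0.55], scale=[0.15, 0.08], size=(50, 2))
gaze_points = np.vstack([straight, turning])

# Cluster the gaze points; the number of clusters (3) is an arbitrary choice.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(gaze_points)

# Report the spread of gaze within each cluster as a rough proxy for how much
# the driver's focus varies horizontally and vertically.
for label in range(kmeans.n_clusters):
    cluster = gaze_points[kmeans.labels_ == label]
    h_std, v_std = cluster.std(axis=0)
    print(f"cluster {label}: centre={kmeans.cluster_centers_[label].round(2)}, "
          f"horizontal spread={h_std:.3f}, vertical spread={v_std:.3f}")
```

In this sketch, wider per-cluster spreads during turning segments than during straight driving would correspond to the paper's observation that gaze variation increases considerably at turns and traffic lights.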