A Performance Comparison of Deep Learning Methods for Real-time Localisation of Vehicle Lights in Video Frames

C. Rapson, Boon-Chong Seet, M. Naeem, J. Lee, R. Klette
DOI: 10.1109/ITSC.2019.8917087
Published in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), October 2019, pp. 567-572
Citations: 5

Abstract

A vehicle’s braking lights can help to infer its future trajectory. Visible light communication using vehicle lights can also transmit other safety information to assist drivers with collision avoidance (whether the drivers be human or autonomous). Both these use cases require accurate localisation of vehicle lights by computer vision. Due to the large variation in lighting conditions (day, night, fog, snow, etc), the shape and brightness of the light itself, as well as difficulties with occlusions and perspectives, conventional methods are challenging and deep learning is a promising strategy. This paper presents a comparison of deep learning methods which are selected based on their potential to evaluate real-time video. The detection accuracy is shown to have a strong dependence on the size of the vehicle light within the image. A cascading approach is taken, where a downsampled image is used to detect vehicles, and then a second routine searches for vehicle lights at higher resolution within these Regions of Interest. This approach is demonstrated to improve detection, especially for small objects. Using YOLOv3 for the first stage and Tiny_YOLO for the second stage achieves satisfactory results across a wide range of conditions, and can execute at 37 frames per second. The ground truth for training and evaluating the methods is available for other researchers to use and compare their results.
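The cascading approach described above can be sketched as a two-stage pipeline: stage 1 detects vehicles in a downsampled frame, and stage 2 searches each vehicle's Region of Interest at full resolution for lights, mapping the results back to frame coordinates. The detector callables below are hypothetical stand-ins for YOLOv3 and Tiny_YOLO (the paper's actual networks); only the ROI scaling and coordinate offsetting are concrete.

```python
# Minimal sketch of the two-stage (cascaded) detection pipeline, assuming
# box format (x, y, width, height) in pixels. The detectors are stubs
# standing in for YOLOv3 (vehicles) and Tiny_YOLO (lights).
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)


def cascade_detect(scale: float,
                   detect_vehicles: Callable[[], List[Box]],
                   detect_lights: Callable[[Box], List[Box]]) -> List[Box]:
    """Stage 1 on a frame downsampled by `scale`, stage 2 per vehicle ROI.

    detect_vehicles returns boxes in downsampled coordinates;
    detect_lights returns boxes relative to the ROI it is given.
    Returns light boxes in full-resolution frame coordinates.
    """
    lights: List[Box] = []
    for (x, y, w, h) in detect_vehicles():
        # Map the vehicle box back to full-resolution coordinates.
        roi = (int(x / scale), int(y / scale), int(w / scale), int(h / scale))
        for (lx, ly, lw, lh) in detect_lights(roi):
            # Offset each light box by its ROI origin to get frame coords.
            lights.append((roi[0] + lx, roi[1] + ly, lw, lh))
    return lights


# Example with stub detectors: one vehicle at (100, 50), size 80x60, found
# in a half-resolution frame; one light at offset (10, 20) inside its ROI.
result = cascade_detect(
    scale=0.5,
    detect_vehicles=lambda: [(100, 50, 80, 60)],
    detect_lights=lambda roi: [(10, 20, 30, 15)],
)
print(result)  # [(210, 120, 30, 15)]
```

Detecting lights inside a full-resolution crop rather than the whole downsampled frame is what recovers accuracy for small objects, since a vehicle light may span only a few pixels after downsampling.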