On board camera perception and tracking of vehicles

J. M. Collado, C. Hilario, J. M. Armingol, A. D. L. Escalera
International Conference on Computer Vision Theory and Applications
DOI: 10.5220/0002066600570066
Published: 2018-08-17
Citations: 1

Abstract

In this paper a visual perception system for Intelligent Vehicles is presented. The goal of the system is to perceive the surroundings of the vehicle, looking for other vehicles. Depending on when and where they have to be detected (overtaking, at long range), the system either analyses movement or uses a geometrical vehicle model to perceive them. The vehicles are then tracked. The algorithm takes the road lane information into account in order to apply some geometric restrictions. Additionally, a multi-resolution approach is used to speed up the algorithm, allowing real-time operation. Examples on real images validate the algorithm.

1 Perception in Intelligent Vehicles

Human errors cause most traffic accidents; they can be reduced, but not completely eliminated, by educational campaigns. That is why the analysis of the environment with sensors is being researched. These perception systems are called Advanced Driver Assistance Systems (ADAS), and they are expected to reduce the number, danger and severity of traffic accidents. Several ADAS currently being researched for Intelligent Vehicles are based on Computer Vision, among them Adaptive Cruise Control (ACC), which has to detect and track other vehicles. At present, commercial equipment is based on distance sensors such as radar or LIDAR. Both types of sensors provide a direct distance measurement of the obstacles in front of the vehicle, are easily integrated with the vehicle control, are able to work under bad weather conditions, and are little affected by lighting conditions. The economic cost of LIDAR and the narrow field of view of radar are drawbacks that make Computer Vision (CV) an alternative or complementary sensor.
Although CV is not able to work under bad weather conditions and its information is much more difficult to process, it gives a richer description of the environment surrounding the vehicle. From the point of view of CV, the research on vehicle detection based on an on-board system can be classified into three main groups. The first is bottom-up or feature-based, where the algorithms look sequentially for some features that define a vehicle. These methods have two drawbacks: the vehicle is lost if one feature is not sufficiently present in the image, and false tracks can deceive the algorithm. The second is top-down or model-based, where there are one or several vehicle models and the best match is found in the image through a likelihood function. These methods are more robust than the previous ones, but slower. The algorithm presented in this paper follows this approach. The third approach is learning-based, mainly using Neural Networks (NN), which need many images for training. They are usually combined with a bottom-up algorithm to check whether a vehicle has actually been detected; otherwise they have to scan the whole image and are very slow. In [1], the road limits are detected first; after that, the shadow under the vehicles is looked for, and symmetry and vertical edges confirm whether there is a vehicle. In [2], symmetry and an elastic net are used to find vehicles. In [3], interesting zones in the image are localized using Local Orientation Coding, and a back-propagation NN confirms or rejects the presence of a vehicle. Shadow, entropy and symmetry are used in [4]. Symmetry is used in [5] to determine the image column where the vehicle is; after that, a U-shaped pattern is looked for to find the vehicle, and tracking is performed with correlation. In [6], overtaking vehicles are detected through image differencing and the other vehicles through correlation. Several 3D vehicle models are used in [7].
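As an illustration of the symmetry cue used in works such as [2] and [5], the following pure-Python sketch scores each candidate column by the sum of absolute differences between mirrored pixel pairs and returns the most symmetric one. The function and parameter names are ours, not the paper's, and the search is deliberately brute-force for clarity:

```python
def symmetry_column(img, half_width):
    """Return the column about which a grayscale image (a list of rows)
    is most left-right symmetric -- the kind of cue used to hypothesize
    the vertical axis of a vehicle. `half_width` is the number of pixel
    pairs compared on each side of the candidate axis."""
    h, w = len(img), len(img[0])
    best_col, best_score = None, None
    for c in range(half_width, w - half_width):
        # Sum of absolute differences between mirrored pixel pairs:
        # lower means more symmetric about column c.
        sad = sum(abs(img[r][c - d] - img[r][c + d])
                  for r in range(h) for d in range(1, half_width + 1))
        if best_score is None or sad < best_score:
            best_col, best_score = c, sad
    return best_col
```

On an image that is perfectly mirrored about one column, the SAD there is zero, so that column wins; in practice the score would be computed only inside the region of interest delimited by the lane detection module.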
The road limits are calculated and the geometrical relationship between the camera and the road is known. Preceding vehicles are detected in [8] by calculating a discriminant function from examples. A different way of reviewing the research on vehicle detection based on optical sensors can be found in [9]. The review shows some important aspects. First, the module in charge of detecting other vehicles has to exchange information with the lane detection module: the regions where vehicles can appear are delimited and some geometric restrictions can be applied, which reduces the number of false positives and speeds up the algorithm. Moreover, the detection of the road limits becomes more robust, as this module can deal with the partial occlusions produced by vehicles. Second, vehicle appearance changes with distance and position with respect to the camera. A model-based approach is not useful for detecting overtaking vehicles, which are not fully seen in the image, and a vehicle that is far away shows a low apparent speed in the image. Several areas of the image have to be defined in order to specify where, how and what is going to be looked for. Third, the algorithm not only has to detect vehicles but also to track them and specify their state. These three points define the structure of the paper.

2 Different Areas and Vehicle Appearance

Different features define the same vehicle depending on the area of the image where it appears. As shown in Fig. 1, the lateral areas of the image are the only ones where overtaking vehicles can appear. Depending on the country, overtaking vehicles will appear on the left/right lane and overtaken vehicles on the right/left one. A model-based approach is difficult to implement there, and it is better to use a feature-based approach, mainly taking movement into account. A different case is when the vehicle is in front of the camera: the rear part of the vehicle is fully seen in the image, and a model-based approach is possible.
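A known camera-road geometry of the kind mentioned above is commonly exploited with the flat-road pinhole model, in which the distance to a vehicle follows from the image row of its contact point with the road. The sketch below is our own illustration of that classic relation, with hypothetical names and values; the paper does not give its concrete formulation:

```python
def ground_distance(y_contact, y_horizon, focal_px, cam_height_m):
    """Flat-road pinhole model: distance (in metres) to a vehicle whose
    contact point with the road projects to image row y_contact (rows
    grow downward). Assumes a level road, a forward-looking camera at
    cam_height_m metres, and a focal length of focal_px pixels."""
    dy = y_contact - y_horizon
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    # Similar triangles: dy / focal_px = cam_height_m / distance.
    return focal_px * cam_height_m / dy
```

For example, with a focal length of 800 px and a camera 1.5 m above the road, a contact point 200 rows below the horizon corresponds to a distance of 6 m; far vehicles project close to the horizon, which is why their apparent size and speed in the image are low.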
Besides these areas, there is another one corresponding to vehicles that have just overtaken ours. There, the rear part of the vehicle is completely seen in the image, although a small deformation due to projective distortion appears.
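The movement cue favoured for the lateral areas, image differencing as in [6], can be sketched as a thresholded absolute difference between consecutive frames. This is a minimal pure-Python illustration of the idea, not the paper's implementation:

```python
def motion_mask(prev, curr, thresh):
    """Per-pixel absolute frame difference, thresholded to a binary
    mask -- the kind of cue used to spot overtaking vehicles in the
    lateral image areas, where movement rather than a geometrical model
    is the discriminant. Images are lists of rows of grayscale values."""
    return [[1 if abs(p - c) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

A cluster of set pixels entering from the image border of the overtaking lane would then trigger the feature-based detector, while the static background cancels out.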