{"title":"Early and Multi Level Fusion for Reliable Automotive Safety Systems","authors":"U. Scheunert, Philipp Lindner, E. Richter, T. Tatschke, Dominik Schestauber, E. Fuchs, G. Wanielik","doi":"10.1109/IVS.2007.4290114","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290114","url":null,"abstract":"The fusion of data from different sensorial sources is today the most promising method to increase robustness and reliability of environmental perception. The project ProFusion2 pushes the sensor data fusion for automotive applications in the field of driver assistance systems. ProFusion2 was created to enhance fusion techniques and algorithms beyond the current state-of-the-art. It is a horizontal subproject in the Integrated Project PReVENT (funded by the EC). The paper presents two approaches concerning the detection of vehicles in road environments. An early fusion and a multi level fusion processing strategy are described. The common framework for the representation of the environment model and the representation of perception results is introduced. The key feature of this framework is the storing and representation of all data involved in one perception memory in a common data structure and the holistic accessibility.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114254891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient extrinsic calibration of a multiple laser scanners and cameras' sensor system on a mobile platform","authors":"Huijing Zhao, Yuzhong Chen, R. Shibasaki","doi":"10.1109/IVS.2007.4290151","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290151","url":null,"abstract":"This work is motivated by a development of a portable and low-cost solution for road mapping in downtown area using a number of laser scanners and video cameras that are mounted on an intelligent vehicle. Sensors on the vehicle platform are considered to be removable, so that extrinsic calibrations are required after each sensors' setting up. Extrinsic calibration might always happen at or near measuremental sites, so that the facilities such as specially marked large environment could not be supposed as a given in the process. In this research, we present a practical method for extrinsic calibration of multiple laser scanners and video cameras that are mounted on a vehicle platform. Referring to a fiducial coordinate system on vehicle platform, a constraint between the data of a laser scanner and of a video camera is established. It is solved in an iterative way to find a best solution from the laser scanner and from the video camera to the fiducial coordinate system. On the other hand, all laser scanners and video cameras are calibrated for each laser scanner and video camera pair that has common in feature points in a sequential way. An experiment is conducted using the data measured on a normal street road. Calibration results are demonstrated by fusing the sensor data into a global coordinate system.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129943159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking with Stereo-vision System for Low Speed Following Applications","authors":"J. Morat, F. Devernay, S. Cornou","doi":"10.1109/IVS.2007.4290240","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290240","url":null,"abstract":"Research in adaptative cruise control (ACC) is currently one of the most important topics in the field of intelligent transportation systems. The main challenge is to perceive the environment, especially at low speed. In this paper, we present a novel approach to track the 3D trajectory and speed of the obstacles and the surrounding vehicles through a stereo-vision system. This tracking method extends the classical patch-based Lucas-Kanade algorithm by integrating the geometric constraints of the stereo system into the motion model: the epipolar constraint, which enforces the tracked patches to remain on the epipolar lines, and the magnification constraint, which links the disparity of the tracked patches to the apparent size of these patches. We report experimental results on simulated and real data showing the improvement in accuracy and robustness of our algorithm compared to the classical Lucas-Kanade tracker.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"1077 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128285205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent Monitoring and Adaptive Competence Assignment for Driver and Vehicle","authors":"Fabian Mueller, A. Wenzel","doi":"10.1109/IVS.2007.4290264","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290264","url":null,"abstract":"This paper presents a concept for achieving higher safety in road traffic by addressing long and short term reduced capability of the driver in regard to his driving task. Pivot point of the proposed concept is the continuous monitoring of the driver and the vehicle's state and their competence assignment. For this purpose, the driver and vehicle will be treated as one system and regarded under cybernetic aspects. For estimation of the driver's condition or capability respectively, an approach is suggested, which combines data of the vehicle sensory equipment with methods for monitoring drivers.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123854647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"H∞ Filter Design for Vehicle Tracking Under Delayed and Noisy Measurements","authors":"S. Ezercan, H. Ozbay","doi":"10.1109/IVS.2007.4290296","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290296","url":null,"abstract":"In many intelligent vehicles applications tracking plays an important role. This paper considers tracking of a vehicle under delayed and noisy measurements. For this purpose we design an H∞ optimal filter for linear systems with time delays in the state and output variables. By using the duality between filtering and control, the problem at hand is transformed to a robust controller design for systems with time delays. The skew Toeplitz method developed earlier for the robust control of infinite dimensional systems is used to solve the H∞ filtering problem. The results are illustrated with simulations and effects of the time delay on the tracking performance are demonstrated.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123493483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Night Vision Module for the Detection of Distant Pedestrians","authors":"M. Bertozzi, A. Broggi, S. Ghidoni, M. Meinecke","doi":"10.1109/IVS.2007.4290086","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290086","url":null,"abstract":"This paper presents a monocular night vision system specifically developed for detecting very distant pedestrians. The focus of the system is the recognition of pedestrians that are between 40 and 100 m away from the camera. The system is intended to integrate with an existing system, which is capable of detecting pedestrians at distances less than 40 m. At very large distances, pedestrians appear at low resolution, and this requires a specific detection algorithm, rather than an adaptation of an existing one. The presented system performs best in rural environments, where it can locate pedestrians at such great distances, that the pedestrians are hardly visible even to a human driver.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"33 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114115142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A switched optimized approach for road-departure avoidance: implementation results","authors":"N. Minoiu, M. Netto, S. Mammar, B. Lusetti","doi":"10.1109/IVS.2007.4290212","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290212","url":null,"abstract":"We present in this contribution the design and implementation of a steering assistance control. The main goal is to avoid the lane departure in case the driver loses attention. This control has been developed to keep the vehicle's trajectory within certain bounds during the assistance intervention, while maintaining a limited torque control input. In order to achieve this goal we have employed both Lyapunov theory and LMI optimization methods.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114877943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grayscale Correlation based 3D Model Fitting for Occupant Head Detection and Tracking","authors":"Zhencheng Hu, T. Kawamura, K. Uchimura","doi":"10.1109/IVS.2007.4290290","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290290","url":null,"abstract":"Occupants inside the vehicle can be deadly injured by the deployment of airbag at the time of crash. New collision safety technology requires classifying the occupant and tracking their position in real-time in order to adaptively deploy the air bag. This paper presents a fast 3D model fitting algorithm based on grayscale correlation of stereo disparity data, to detect and track occupant head position. The proposed system uses stereo vision with IR illumination for depth data acquisition. By detecting body center line and extra-near disparity calculation, this method is proven to be robust and accurate in variant lighting condition and occupant movement. Evaluation of the method shows over 98% correct head detection and near 100% correctness with head tracking.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132247064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate Vehicle Localization using DTW between Range Data Map and Laser Scanner Data Sequences","authors":"N. Shibuhisa, J. Sato, T. Takahashi, I. Ide, H. Murase, Y. Kojima, A. Takahashi","doi":"10.1109/IVS.2007.4290243","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290243","url":null,"abstract":"We propose a method for accurate vehicle localization. The proposed method detects a vehicle's location and traveling lane by matching between a pre-constructed range data map and laser scanner data sequences measured while the vehicle runs. The range data map consists of an absolute position on the road and range data at the position. We use dynamic time warping (DTW) to align multiple range data sequences. Experiments using 40 data sequences collected while a vehicle ran on the same route with multiple traffic lanes were conducted. The results demonstrated the effectiveness of vehicle localization and traveling lane classification.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"4 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133105990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo based Obstacle Detection with Uncertainty in Rough Terrain","authors":"W. van der Mark, J. C. van den Heuvel, F. Groen","doi":"10.1109/IVS.2007.4290248","DOIUrl":"https://doi.org/10.1109/IVS.2007.4290248","url":null,"abstract":"Autonomous robot vehicles that operate in off-road terrain should avoid obstacle hazards. In this paper we present a stereo vision based method that is able to cluster reconstructed terrain points into obstacles by evaluating their relative angles and distances. In our approach, constraints are enforced on these geometric properties by a set of pixel threshold values. Because these values are all computed during an initialisation step, only simple pixel threshold operations remain to be performed during the real-time obstacle detection. An advantage of this novel approach is that the distance uncertainties can be incorporated into the thresholds. Detected obstacle points are clustered into objects on the basis of their pixel connectivity. Objects with insufficient pixels, elevation and slope are rejected. Remaining non-obstacle pixels are regarded as ground surface points. They are used to update the orientation of the stereo camera relative to the ground surface. This prevents orientation errors during stereo reconstruction and the subsequent obstacle detection steps. Our results show the drawbacks of ignoring the uncertainties in the stereo distance estimates for obstacle detection. It leads to over-segmentation and increases the number of falsely detected obstacles. Because our method incorporates these uncertainties, it can detect more of the obstacle surface pixels at larger distances. This leads to significantly less false obstacle detections.","PeriodicalId":190903,"journal":{"name":"2007 IEEE Intelligent Vehicles Symposium","volume":"257 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133250151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}