{"title":"Preceding vehicle state prediction","authors":"Rohit Pandita, D. Caveney","doi":"10.1109/IVS.2013.6629597","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629597","url":null,"abstract":"A model-based approach is presented for predicting future state (position and velocity) of the preceding vehicle in response to velocity disturbance from lead vehicle in a platoon. Online parameter estimation is used to adapt model parameters based on characteristics of individual drivers in the platoon. A car-following model is used to describe platoon longitudinal dynamics. Examples are presented using simulated as well as real-traffic data.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114770349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning a multiview part-based model in virtual world for pedestrian detection","authors":"Jiaolong Xu, David Vázquez, Antonio M. López, J. Marín, D. Ponsa","doi":"10.1109/IVS.2013.6629512","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629512","url":null,"abstract":"State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient as: (i) the part detectors are trained with precisely extracted virtual examples, thus no latent learning is needed, (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model, (iii) a top-down approach is used for part detection which reduces the searching space. We evaluate our model on Daimler and Karlsruhe Pedestrian Benchmarks with publicly available Caltech pedestrian detection evaluation framework and the result outperforms the state-of-the-art latent SVM V4.0, on both average miss rate and speed (our detector is ten times faster).","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125649309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Making visual SLAM consistent with geo-referenced landmarks","authors":"Guillaume Bresson, R. Aufrère, R. Chapuis","doi":"10.1109/IVS.2013.6629525","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629525","url":null,"abstract":"This paper presents a solution to the consistency problem of SLAM algorithms. We propose here to model the drift affecting the estimation process. The divergence is seen as a bias on the vehicle localization. By using such a model, we are able to guarantee the consistency of the localization. We developed a filter taking into account the divergence and allowing to easily integrate any information helping to characterize the current drift. Geo-referenced landmarks are used in order to provide an absolute localization and drastically reduce the impact of the divergence. The filter is designed around an Extended Kalman Filter and is totally separated from the classical SLAM algorithm. Our method can consequently be connected to any existing SLAM process without trouble. A vehicle performing monocular SLAM in real time was used to validate our approach with real data. The results show that the integrity of the filter is preserved during the whole trajectory and that geo-referenced information helps reducing the natural SLAM drift.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121357518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing cooperative and non-cooperative crash risk-assessment","authors":"S. Demmel, D. Gruyer, A. Rakotonirainy","doi":"10.1109/IVS.2013.6629598","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629598","url":null,"abstract":"Cooperative Systems provide, through the multiplication of information sources over the road, a lot of potential to improve the assessment of the road risk describing a particular driving situation. In this paper, we compare the performance of a cooperative risk assessment approach against a non-cooperative approach; we used an advanced simulation framework, allowing for accurate and detailed, close-to-reality simulations. Risk is estimated, in both cases, with combinations of indicators based on the TTC. For the noncooperative approach, vehicles are equipped only with an AAC-like forward-facing ranging sensor. On the other hand, for the cooperative approach, vehicles share information through 802.11p IVC and create an augmented map representing their environment; risk indicators are then extracted from this map. Our system shows that the cooperative risk assessment provides a systematic increase of forward warning to most of the vehicles involved in a freeway emergency braking scenario, compared to a non-cooperative system.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115067941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seen and missed traffic objects: A traffic object-specific awareness estimation","authors":"Tobias Bar, Denys Linke, D. Nienhuser, J. Zollner","doi":"10.1109/IVWORKSHOPS.2013.6615222","DOIUrl":"https://doi.org/10.1109/IVWORKSHOPS.2013.6615222","url":null,"abstract":"Handing-over vehicle control from a human driver to an intelligent vehicle and vice versa needs elaborate and safe hand-over strategies. Before passing control it must be ensured that the driver is aware of all objects which are important in a particular traffic situation. In this work a decision tree is used to learn which objects attract the driver's gaze in a particular situation. The decision tree classifies on object features as the object's type, velocity, size, color, and brightness. This information is fused from laser-scanners, front camera, and the vehicle's CAN-bus data. Whilst driving, an awareness confidence is built for each object perceived by the laser-scanners. Unexpected gaze behavior is detected by comparing the awareness confidence of each object to the expected gaze behavior, learned by means of the decision tree. Objects overlooked by the driver are further classified as critical or uncritical. This provides valuable information for following human-car interaction, augmented-reality, or safety applications.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115306357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusing laser point cloud and visual image at data level using a new reconstruction algorithm","authors":"Lipu Zhou","doi":"10.1109/IVS.2013.6629655","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629655","url":null,"abstract":"Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system to fuse laser point cloud and visual information at the data level. Generally, cameras and LIDARs mounted on the unmanned ground vehicle have different viewports. Some objects which are visible to a LIDAR may become invisible to a camera. This will result in false depth assignment for the visual image and incorrect colorization for laser points. The inputs of the system are a color image and the corresponding LIDAR data. Coordinates of 3D laser points are first transformed into the camera coordinate system. Points outside the camera viewing volume are clipped. A new algorithm is proposed to recreate the underlying object surface of the potentially visible laser points as quadrangle mesh by exploiting the structure of the LIDAR as a priori. False edge is eliminated by constraining the angle between the laser scan trace and the radial direction of a given laser point, and quadrangles with non-consistent normal are pruned. In addition, the missing laser points are solved to avoid large holes in the reconstructed mesh. At last z-buffer algorithm is used to work for occlusion reasoning. Experimental results show that our algorithm outperforms the previous one. It can assign correct depth information to the visual image and provide the exact color to each laser point which is visible to the camera.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125930437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of distraction on driving performance using touch screen while driving on test track","authors":"T. Hagiwara, Ryo Sakakima, T. Hashimoto, T. Kawai","doi":"10.1109/IVS.2013.6629621","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629621","url":null,"abstract":"This study investigated the effect of distraction on driving performance for drivers using a touch screen. We evaluated the influence of three secondary tasks on the primary task varying the task duration and the screen position using 16 participants whose ages ranged from the twenties to the fifties. The primary task was car following. There were three secondary tasks: calling out numbers on the touch screen, calling out numbers and simultaneously to tap the same number on the touch screen, and tapping the four corners of the touch screen. Driving performance was evaluated in terms of speed, headway and lateral position. Based on the results of the study, when the drivers are operating the touch screen while driving, visual and manual distraction differently affected different driving performance measures. Specifically, visual distraction had a greater effect on longitudinal control measure, whereas combined visual/manual distraction affected longitudinal and lateral control measures.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116064483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards mapping of dynamic environments with FMCW radar","authors":"Bryan Clarke, Stewart Worrall, G. Brooker, E. Nebot","doi":"10.1109/IVS.2013.6629462","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629462","url":null,"abstract":"Frequency-modulated continuous waveform (FMCW) microwave and millimetre-wave radar is an attractive sensor for intelligent transport systems due to its reliable all-weather performance. This paper discusses issues involved in the design of FMCW radar mapping systems for use in collision avoidance in large vehicles operating in dynamic environments. The performance characteristics of radar are examined before an analysis is made of traditional grid-based and feature-based mapping approaches, both conceptually and in terms of implementation. The probability hypothesis density (PHD) filter is discussed as a potentially superior approach for radar mapping in dynamic environments.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122071542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart and Green ACC, adaptation of the ACC strategy for electric vehicle with regenerative capacity","authors":"S. Glaser, O. Orfila, L. Nouvelière, R. Potarusov, Sagar Akhegaonkar, F. Holzmann, Volker Scheuch","doi":"10.1109/IVS.2013.6629592","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629592","url":null,"abstract":"This paper presents an optimization of a conventional Adaptive Cruise Control system (ACC) for the specific use of electric vehicles with regenerative capacity, namely the Smart and Green ACC (SAGA). Longitudinal control strategies, that are developed for the driving assistances, mainly aim at optimizing the safety and the comfort of the vehicle occupants. Electric vehicles have the possibility, depending on the architecture, the speed and the braking demand, to regenerate a part of the electric energy during the braking. Moreover, the electric vehicle range is currently limited. The opportunity to adapt the braking of an ACC system to extend slightly the range must not be avoided. When the ACC is active, the vehicle speed is controlled automatically either to maintain a given clearance to a forward vehicle, or to maintain the driver desired speed, whichever is lower. We define how we can optimize both mode and what is the impact, in term of safety and strategy, including the knowledge of the future of the road, integrating a navigation system.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"314 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122095159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CPHD filter addressing occlusions with pedestrians and vehicles tracking","authors":"L. Lamard, R. Chapuis, Jean-Philippe Boyer","doi":"10.1109/IVS.2013.6629617","DOIUrl":"https://doi.org/10.1109/IVS.2013.6629617","url":null,"abstract":"In this paper, the problem of targets road tracking, like pedestrians and vehicles tracking is addressed. This paper proposes to improve a Cardinalized Probability Hypothesis Density (CPHD) filter in presence of occlusion using the sensor classification of each targets detected. Using this classification, a probability of target type is computed by Bayesian rules and used to deduce the width of targets. This width is necessary to take into account the occlusion problem in the Multi Target Tracking (MTT) filter. Besides, the probability of target type is also used to improve the performance of this MTT thanks to a new computation of the likelihood of measurements. Our system has been validated with real measurements from a smart camera in real traffic conditions.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128472747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}