2017 IEEE Intelligent Vehicles Symposium (IV): Latest Publications

Road grades and tire forces estimation using two-stage extended Kalman filter in a delayed interconnected cascade structure
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995707
R. Cordeiro, A. M. Ribeiro, J. Azinheira, A. Victorino, Paulo A. V. Ferreira, E. Paiva, S. S. Bueno
Abstract: Intelligent vehicles sense their dynamics and the environment to make proper decisions. Some of this information is hard to measure or requires expensive sensors. This paper addresses the estimation of road grade angles, along with tire-ground interaction forces, in a delayed interconnected cascade observer structure. A new approach using a Two-Stage Extended Kalman Filter is proposed, allowing robust simultaneous estimation of the slow- and fast-dynamics variables. Experimental data is used to validate the estimator.
Citations: 8
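The EKF machinery behind such estimators can be illustrated with a minimal single-state sketch: a road grade angle modeled as a slow random walk, observed through a gravity component. The model, noise values, and measurement below are assumptions for illustration, not the paper's two-stage cascade.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def ekf_grade_step(theta, P, z, Q=1e-5, R=0.04):
    """One EKF step estimating road grade angle theta (rad) from a
    gravity-component measurement z = G*sin(theta) + noise.
    Illustrative only: not the paper's full two-stage cascade."""
    # Predict: grade is modeled as a slowly varying random walk
    theta_pred = theta
    P_pred = P + Q
    # Update: linearize h(theta) = G*sin(theta) around the prediction
    H = G * math.cos(theta_pred)          # Jacobian of the measurement model
    S = H * P_pred * H + R                # innovation covariance
    K = P_pred * H / S                    # Kalman gain
    theta_new = theta_pred + K * (z - G * math.sin(theta_pred))
    P_new = (1.0 - K * H) * P_pred
    return theta_new, P_new

# Converge toward a true 5% grade (~0.05 rad) from noiseless measurements
theta, P = 0.0, 1.0
for _ in range(50):
    theta, P = ekf_grade_step(theta, P, G * math.sin(0.05))
```

A two-stage variant would run a second, slower filter of the same shape on top of this one, feeding its state into the fast filter's measurement model.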
Deep learning algorithm for autonomous driving using GoogLeNet
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995703
Mohammed Al-Qizwini, Iman Barjasteh, Hothaifa Al-Qassab, H. Radha
Abstract: In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution is a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top three Convolutional Neural Network (CNN) models from feature-extraction competitions and test their performance for autonomous driving. The experimental results show that GoogLeNet performs best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, which we refer to as GoogLeNet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle, compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.
Citations: 169
Traffic signal timing optimization in connected vehicles environment
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995896
Wan Li, X. Ban
Abstract: We study the traffic signal control problem in the connected vehicle (CV) environment, assuming a fixed cycle length so that the proposed model can be extended readily to the coordination of multiple signals. The signal control problem is to minimize the weighted sum of total system fuel consumption and travel times. Due to the large dimension of the problem and the complexity of the nonlinear car-following model, we propose a dynamic programming (DP) formulation, dividing the timing decisions into stages (one stage per phase) and approximating the fuel consumption and travel time of a stage as functions of the stage's state and decision variables. We also propose a two-step method, an end-stage cost, and a branch-and-bound algorithm to ensure that the obtained optimal solution leads to the fixed cycle length. Numerical experiments test the performance of the proposed model using data generated by traffic simulation.
Citations: 19
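The stage-by-stage structure with an end-stage constraint can be sketched on a toy two-phase signal: allocate green time per phase (one stage per phase), with an end-stage check that forces the fixed cycle length. The demands and cost function below are invented stand-ins, not the paper's fuel and travel-time approximations.

```python
# Toy staged optimization: split a fixed cycle's green time across phases
# to minimize an approximate delay cost. Stage = phase, state = cycle time
# already used; the end-stage check enforces the fixed cycle length.
CYCLE = 60             # fixed cycle length (s)
MIN_G, MAX_G = 10, 50  # admissible green durations per phase (s)
DEMANDS = [0.6, 0.3]   # hypothetical flow ratios for two phases

def phase_cost(green, demand):
    # crude stand-in for a fuel + travel-time cost:
    # delay grows as the phase's demand outstrips its green share
    return demand * CYCLE / green

def optimize_splits():
    best_cost, best_plan = float("inf"), None

    def recurse(stage, used, plan, cost):
        nonlocal best_cost, best_plan
        if stage == len(DEMANDS):
            # end-stage constraint: the splits must fill the cycle exactly
            if used == CYCLE and cost < best_cost:
                best_cost, best_plan = cost, plan
            return
        for g in range(MIN_G, MAX_G + 1):
            if used + g <= CYCLE:
                recurse(stage + 1, used + g, plan + [g],
                        cost + phase_cost(g, DEMANDS[stage]))

    recurse(0, 0, [], 0.0)
    return best_plan, best_cost

plan, cost = optimize_splits()
```

A real DP would memoize on (stage, used) rather than enumerate, but the stage/state/end-cost decomposition is the same.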
Mono-vision based moving object detection in complex traffic scenes
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995857
V. Fremont, S. R. Florez, Bihao Wang
Abstract: Vision-based motion segmentation of dynamic objects can significantly help to understand the context around vehicles, and thereby improve road traffic safety and autonomous navigation. Moving object detection in complex traffic scenes is therefore an unavoidable issue for ADAS and autonomous vehicles. In this paper, we propose an approach that combines several multiple-view geometry constraints to detect moving objects using only a monocular camera. Self-assigned weights, estimated online, moderate the contribution of each constraint; this combination enhances detection performance in degenerate situations. Experimental results show that the proposed approach provides accurate moving-object detection in dynamic traffic scenarios with large camera motions.
Citations: 6
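One of the geometry constraints involved, the epipolar constraint, can be sketched for the simplest case: a camera translating purely along its x-axis, so the essential matrix reduces to the skew form of t = (1, 0, 0). Static points satisfy the constraint; a residual flags a candidate moving object. The motion model and threshold here are assumptions for illustration, not the paper's weighted combination.

```python
def epipolar_residual(p1, p2):
    """Residual of the epipolar constraint x2^T E x1 for a camera
    translating purely along its x-axis, i.e. E = [t]x with t = (1,0,0),
    in normalized image coordinates. For this E the product
    x2^T E x1 with x = (u, v, 1) reduces to v1 - v2."""
    (u1, v1), (u2, v2) = p1, p2
    return v1 - v2

def is_moving(p1, p2, thresh=0.01):
    # static points slide along their epipolar line (residual ~ 0);
    # an independently moving point generally leaves it
    return abs(epipolar_residual(p1, p2)) > thresh

static = ((0.2, 0.1), (0.25, 0.1))   # stays on its epipolar line
mover = ((0.2, 0.1), (0.22, 0.18))   # leaves its epipolar line
```

This single test degenerates when an object moves along the epipolar line, which is exactly why combining several constraints with online weights helps.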
PCE-SLAM: A real-time simultaneous localization and mapping using LiDAR data
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995960
Pragya Agrawal, Asif Iqbal, Brittney Russell, M. K. Hazrati, Vinay Kashyap, F. Akhbari
Abstract: This paper describes the design, implementation, and validation of a real-time LiDAR-based Simultaneous Localization and Mapping solution for intelligent vehicles. We propose a two-step Sweep-to-Sweep Motion Estimation and Sweep-to-Map Registration framework that compensates for the distortion of the point cloud, estimates the vehicle's motion, and generates a 3D map of the world. The rotating motion of the LiDAR and the longitudinal motion of the vehicle together create an inherent distortion in the relative motions observed in each scan per sweep, so applying the same translation and rotation to the entire sweep does not guarantee an optimal estimate of the LiDAR's sweep-to-sweep motion. The key idea in this paper is therefore to obtain accurate localization by processing each LiDAR sweep batch-wise, followed by 3D map registration using Iterative Closest Point (ICP). To reduce drift in localization, ICP utilizes local map information within a radius of about 100 m around the vehicle. Our main contribution is an innovative CPU-only pipeline for simultaneous localization and mapping that runs in real time on Intel architecture. We have tested our algorithm by processing every sweep from a Velodyne VLP-16 LiDAR in about 50 ms on a vehicle moving at speeds up to 25 mph on urban roads and in parking structures. Our algorithm has been evaluated on KITTI datasets for city and suburban roads with an average relative position error of around 1%.
Citations: 12
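The ICP loop at the core of the registration step can be sketched in 2D with pure Python. The paper registers 3D sweeps against a local map; the point sets and transform below are made up to show only the correspondence-then-closed-form-alignment iteration.

```python
import math

def icp_2d(source, target, iters=20):
    """Minimal point-to-point ICP in 2D: alternate nearest-neighbour
    matching with a closed-form rigid alignment of the matched pairs."""
    src = [list(p) for p in source]
    for _ in range(iters):
        # 1. nearest-neighbour correspondences (brute force)
        pairs = [(p, min(target, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in src]
        # 2. closed-form 2D rigid alignment of the matched pairs
        n = len(pairs)
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        sxx = sum((p[0]-mx)*(q[0]-qx) + (p[1]-my)*(q[1]-qy) for p, q in pairs)
        sxy = sum((p[0]-mx)*(q[1]-qy) - (p[1]-my)*(q[0]-qx) for p, q in pairs)
        ang = math.atan2(sxy, sxx)      # optimal rotation angle
        c, s = math.cos(ang), math.sin(ang)
        tx = qx - (c*mx - s*my)         # optimal translation
        ty = qy - (s*mx + c*my)
        # 3. apply the estimated transform to the source cloud
        src = [[c*x - s*y + tx, s*x + c*y + ty] for x, y in src]
    return src

# Recover a small rotation + translation between two copies of an L-shape
target = [[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]]
a = math.radians(10)
source = [[math.cos(a)*x - math.sin(a)*y + 0.3,
           math.sin(a)*x + math.cos(a)*y - 0.2] for x, y in target]
aligned = icp_2d(source, target)
```

Restricting `target` to points within a radius of the current pose, as the paper does with its ~100 m local map, bounds the nearest-neighbour search and limits drift.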
Generalizable intention prediction of human drivers at intersections
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995948
Derek J. Phillips, T. Wheeler, Mykel J. Kochenderfer
Abstract: Effective navigation of urban environments is a primary remaining challenge in the development of autonomous vehicles. Intersections come in many shapes and forms, making it difficult to find features and models that generalize across intersection types. New and traditional features are used to train several intersection intention models on real-world intersection data, and a class of recurrent neural networks, Long Short-Term Memory networks (LSTMs), is shown to outperform the state of the art. The models predict whether a driver will turn left, turn right, or continue straight, with consistent accuracy up to 150 m before reaching the intersection. The results show promise for further use of LSTMs, with mean cross-validated prediction accuracy averaging over 85% for both three- and four-way intersections, and 83% for the highest-throughput intersection.
Citations: 129
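The gate equations that let an LSTM carry driver context over a long approach can be shown with a single scalar unit in pure Python. The weights and input sequence below are illustrative values, not trained intention-prediction parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h, c, W):
    """One step of a single-unit LSTM cell (scalar input and state).
    W maps each gate name to (input weight, recurrent weight, bias)."""
    i = sigmoid(W["i"][0]*x + W["i"][1]*h + W["i"][2])    # input gate
    f = sigmoid(W["f"][0]*x + W["f"][1]*h + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0]*x + W["o"][1]*h + W["o"][2])    # output gate
    g = math.tanh(W["g"][0]*x + W["g"][1]*h + W["g"][2])  # candidate state
    c_new = f * c + i * g          # memory cell carries long-term context
    h_new = o * math.tanh(c_new)   # hidden state is the step's output
    return h_new, c_new

# Illustrative shared weights; a real model learns these per gate
W = {k: (0.5, 0.5, 0.0) for k in "ifog"}
h = c = 0.0
for x in [1.0, -1.0, 1.0]:   # toy feature sequence (e.g. yaw-rate samples)
    h, c = lstm_cell_step(x, h, c, W)
```

In an intention model, `h` at each timestep would feed a softmax over {left, right, straight}; the forget gate is what lets evidence from 150 m out persist to the stop line.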
Deep reinforcement learning-based vehicle energy efficiency autonomous learning system
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995880
Xuewei Qi, Yadan Luo, Guoyuan Wu, K. Boriboonsomsin, M. Barth
Abstract: To mitigate air pollution and reduce greenhouse gas (GHG) emissions, plug-in hybrid electric vehicles (PHEVs) have been developed to achieve higher fuel efficiency. The Energy Management System (EMS) is a key component of a PHEV for achieving better fuel economy, and it is a very active research area. So far, most existing EMS strategies simply follow predefined rules that do not adapt to changing driving conditions; others are starting to incorporate accurate prediction of future traffic conditions. In this study, a deep reinforcement learning based PHEV energy management system is designed to autonomously learn optimal fuel use from the vehicle's own historical driving record. It is a fully data-driven, learning-enabled model that does not rely on any prediction or predefined rules. Experimental results show that the proposed model achieves 16.3% energy savings compared to conventional binary control strategies.
Citations: 49
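The learning-from-experience idea can be sketched with tabular Q-learning, the simplest relative of the deep RL the paper uses, on a toy PHEV model: state is battery charge level, the two actions mirror a binary engine/electric control, and the reward penalizes fuel. The dynamics and rewards are invented for illustration.

```python
import random

random.seed(0)

# Toy PHEV control: state = battery state-of-charge (SOC) level;
# action 0 = electric drive (drains battery, no fuel),
# action 1 = engine (burns one unit of fuel, SOC unchanged).
N_SOC, STEPS, EPISODES = 6, 8, 2000
ALPHA, GAMMA, EPS = 0.2, 0.5, 0.1
Q = [[0.0, 0.0] for _ in range(N_SOC)]

def step(soc, action):
    if action == 0 and soc > 0:
        return soc - 1, 0.0      # electric: free, drains one SOC level
    return soc, -1.0             # engine (or empty battery): burn fuel

for _ in range(EPISODES):
    soc = N_SOC - 1
    for _ in range(STEPS):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = 0 if Q[soc][0] >= Q[soc][1] else 1
        nxt, r = step(soc, a)
        # standard Q-learning temporal-difference update
        Q[soc][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[soc][a])
        soc = nxt

# With charge remaining, the learned policy should prefer electric drive
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N_SOC)]
```

The paper's system replaces the table with a deep network over a richer state (the vehicle's driving history), but the update rule is the same family.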
Velocity and shape from tightly-coupled LiDAR and camera
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995699
Mohammad Hossein Daraei, Anh Vu, R. Manduchi
Abstract: In this paper, we propose a multi-object tracking and reconstruction approach based on measurement-level fusion of LiDAR and camera. The proposed method estimates 3D motion and structure for all rigid obstacles, regardless of object class. Using an intermediate surface representation, measurements from both sensors are processed within a joint framework. We combine optical flow, surface reconstruction, and point-to-surface terms in a tightly-coupled nonlinear energy function, which is minimized using Iteratively Reweighted Least Squares (IRLS). We demonstrate the performance of our model on different datasets (KITTI with a Velodyne HDL-64E, and our own data collected with a 4-layer Ibeo ScaLa) and show an improvement in velocity error and crispness over state-of-the-art trackers.
Citations: 15
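The IRLS idea, solving a robust problem as a sequence of weighted least-squares problems, can be shown on a toy one-parameter fit with Huber weights. The data and loss are made up; the paper applies the same reweighting to its coupled multi-term energy.

```python
def irls_slope(xs, ys, iters=10, delta=1.0):
    """Iteratively Reweighted Least Squares for the 1-parameter fit
    y ~ a*x with Huber weights: quadratic influence for small residuals,
    linear (down-weighted) influence in the tails."""
    a = 0.0
    for _ in range(iters):
        # reweight each point by its current residual
        ws = []
        for x, y in zip(xs, ys):
            r = abs(y - a * x)
            ws.append(1.0 if r <= delta else delta / r)
        # weighted least-squares update (closed form for one parameter)
        a = sum(w * x * y for w, x, y in zip(ws, xs, ys)) / \
            sum(w * x * x for w, x in zip(ws, xs))
    return a

xs = [1, 2, 3, 4, 5, 6]
ys = [2.0, 4.1, 5.9, 8.0, 10.1, 30.0]   # true slope ~2, last point an outlier
slope = irls_slope(xs, ys)
```

Plain least squares on the same data gives a slope near 3.2 because the outlier dominates; the reweighting pulls the estimate back toward the inlier trend.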
Incremental Cross-Modality deep learning for pedestrian recognition
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995771
D. Pop, A. Rogozan, F. Nashashibi, A. Bensrhair
Abstract: In spite of the large number of existing methods, pedestrian detection remains an open challenge. In recent years, deep learning classification methods combined with multi-modality images in different fusion schemes have achieved the best performance. It has been shown that the late-fusion scheme outperforms both direct and intermediate integration of modalities for pedestrian recognition. Hence, in this paper we focus on improving the late-fusion scheme for pedestrian classification on the Daimler stereo vision data set. Each image modality (Intensity, Depth, and Flow) is classified by an independent Convolutional Neural Network (CNN), whose outputs are then fused by a Multi-Layer Perceptron (MLP) before the recognition decision. We propose different methods based on cross-modality deep learning of CNNs: (1) a correlated model, where a single CNN is trained with the Intensity, Depth, and Flow images of each frame; and (2) an incremental model, where a CNN is trained on the first modality's image frames, then a second CNN, initialized by transfer learning from the first, is trained on the second modality's image frames, and finally a third CNN, initialized from the second, is trained on the last modality's image frames. The experiments show that incremental cross-modality deep learning of CNNs improves classification performance not only for each independent modality classifier but also for the late-fusion multi-modality classifier. Different learning algorithms are also investigated.
Citations: 13
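The late-fusion stage can be sketched as combining per-modality score vectors into one decision. Here a single linear layer plus softmax stands in for the paper's MLP, and the per-modality weights are illustrative, not trained.

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def late_fusion(intensity, depth, flow, weights=None):
    """Late fusion of per-modality classifier outputs. Each argument is
    a [pedestrian, background] score pair from one modality's CNN; a
    linear combination + softmax stands in for the fusion MLP."""
    if weights is None:
        weights = [0.5, 0.3, 0.2]   # hypothetical per-modality trust
    modalities = (intensity, depth, flow)
    fused = [sum(w * m[k] for w, m in zip(weights, modalities))
             for k in range(2)]
    return softmax(fused)

# Intensity is confident, depth mildly agrees, flow is ambiguous
probs = late_fusion([2.0, -1.0], [0.5, 0.0], [0.1, 0.1])
```

The appeal of late fusion is visible even in this sketch: one weak modality (Flow) cannot overturn a decision that the stronger modalities agree on.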
Automotive LIDAR objects detection and classification algorithm using the belief theory
2017 IEEE Intelligent Vehicles Symposium (IV) Pub Date : 2017-06-11 DOI: 10.1109/IVS.2017.7995806
Valentin Magnier, D. Gruyer, J. Godelle
Abstract: In autonomous driving applications, the LIDAR is becoming one of the key sensors for perception of the environment. Its working principle, distance ranging with a laser beam that scans the environment, allows highly accurate measurements. Among the sensors commonly used in autonomous driving (cameras, RADARs, and LIDARs), the LIDAR is the best suited to estimate the shape of objects. For the moment, however, LIDARs dedicated to purely automotive applications have at most four measurement layers (four laser beams scanning the environment at different heights). Object detection algorithms therefore have to rely on very few layers to detect and classify the objects perceived in the road scene, which makes them specific. In this paper we present a Detection and Tracking of Moving Objects (DATMO) algorithm featuring object-type classification based on belief theory. The algorithm is specific to automotive applications, so perceived vehicles are classified as bike, car, or truck. At the end of this paper we present an application of this algorithm in a real-world context.
Citations: 33
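The belief-theoretic core of such a classifier is Dempster's rule of combination over subsets of the frame of discernment, here the vehicle classes. The two evidence sources and their mass values below are hypothetical, not the paper's actual LIDAR features.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination. Masses are dicts mapping
    frozensets of hypotheses to belief mass; mass falling on the
    empty intersection is conflict, renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb   # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

BIKE, CAR, TRUCK = frozenset({"bike"}), frozenset({"car"}), frozenset({"truck"})
ALL = BIKE | CAR | TRUCK
# Hypothetical evidence: shape says "car or truck", size leans "car"
m_shape = {CAR | TRUCK: 0.8, ALL: 0.2}
m_size = {CAR: 0.6, ALL: 0.4}
m = dempster_combine(m_shape, m_size)
```

Unlike a plain probability, mass left on composite sets such as {car, truck} expresses honest ignorance, which is useful when only a few LIDAR layers hit the object.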