{"title":"Vehicle positioning with the integration of scene understanding and 3D map in urban environment","authors":"Jiali Bao, Yanlei Gu, S. Kamijo","doi":"10.1109/IVS.2017.7995700","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995700","url":null,"abstract":"Accurate self-localization is a critical problem in autonomous driving systems. An autonomous vehicle requires sub-meter-level positioning for motion planning. However, in urban scenarios, common Global Navigation Satellite System (GNSS) localization suffers from various difficulties such as multipath and Non-Line-Of-Sight (NLOS) reception. Stereo visual odometry can localize the vehicle relatively by tracking the ego-motion of the vehicle from stereo image pairs, but with cumulative error. A 3D map is an effective tool to reduce this cumulative positioning error. In this paper, we propose to realize scene understanding from a stereo camera and to further utilize a city model map, including 3D building and 2D road information, to improve the visual odometry. In our proposal, the stereo camera is applied to generate visual odometry and reconstruct the building scene. The accumulated building scenes form a local building map. We integrate the local building map and a Normal Distribution Transform (NDT) map generated from the 3D building map in a particle filter. The lane detection result helps to rectify the within-lane positioning error and keep the lane with the aid of the 2D road map. We conducted a series of experiments in the Hitotsubashi area of Tokyo, which contains many tall buildings. 
The experimental results indicate that the accumulated error of visual odometry can be corrected by the proposed method and that sub-meter localization accuracy is achieved.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"103 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verifying the safety of lane change maneuvers of self-driving vehicles based on formalized traffic rules","authors":"Christian Pek, P. Zahn, M. Althoff","doi":"10.1109/IVS.2017.7995918","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995918","url":null,"abstract":"Validating the safety of self-driving vehicles requires an enormous amount of testing. By applying formal verification methods, we can prove the correctness of the vehicles' behavior, which at the same time reduces remaining risks and the need for extensive testing. However, current safety approaches do not consider liabilities of traffic participants if a collision occurs. Utilizing formalized traffic rules to verify motion plans allows this problem to be solved. We present a novel approach for verifying the safety of lane change maneuvers, using formalized traffic rules according to the Vienna Convention on Road Traffic. This allows us to provide additional guarantees that if a collision occurs, the self-driving vehicle is not responsible. Furthermore, we consider misbehavior of other traffic participants during lane changes and propose feasible solutions to avoid or mitigate a potential collision. The approach has been evaluated using real traffic data provided by the NGSIM project as well as simulated lane changes.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129840124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A computational framework for driver's visual attention using a fully convolutional architecture","authors":"Ashish Tawari, Byeongkeun Kang","doi":"10.1109/IVS.2017.7995828","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995828","url":null,"abstract":"Perceiving and interacting with other traffic participants in a complex driving environment is a challenging and important task. The human vision system plays a crucial role in achieving this task. In particular, visual attention mechanisms allow a human driver to cleverly attend to the salient and relevant regions of the scene and make the decisions necessary for safe driving. Thus, investigating human vision systems has great potential to improve assistive, and even autonomous, vehicular technologies. In this paper, we investigate the driver's gaze behavior to understand visual attention. We first present a Bayesian framework to model the visual attention of a human driver. Then, based on this framework, we develop a fully convolutional neural network to estimate the salient region in a novel driving scene. We systematically evaluate the proposed method using on-road driving data and compare it with other state-of-the-art saliency estimation approaches. Our analyses show promising results.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133259343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles","authors":"Constantin Hubmann, Marvin Becker, Daniel Althoff, David Lenz, C. Stiller","doi":"10.1109/IVS.2017.7995949","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995949","url":null,"abstract":"Autonomous driving requires decision making in dynamic and uncertain environments. The uncertainty in the prediction originates from noisy sensor data and from the fact that the intentions of human drivers cannot be directly measured. This problem is formulated as a partially observable Markov decision process (POMDP) with the intentions of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. The policy is therefore optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model for the other vehicles. Considering possible future measurements of the surroundings allows the autonomous car to incorporate the estimated change in future prediction accuracy into the optimal policy. A compact representation yields a low-dimensional state space, so that the problem can be solved online for varying road layouts and numbers of other vehicles. This is done with a point-based solver in an anytime fashion on a continuous state space. We show results from simulations of the crossing of complex (unsignalized) intersections. 
Our approach performs nearly as well as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114732161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speed profile planning in dynamic environments via temporal optimization","authors":"Changliu Liu, W. Zhan, M. Tomizuka","doi":"10.1109/IVS.2017.7995713","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995713","url":null,"abstract":"To generate safe and efficient trajectories for an automated vehicle in dynamic environments, a layered approach is usually considered, which separates path planning and speed profile planning. This paper is focused on speed profile planning for a given path that is represented by a set of waypoints. The speed profile will be generated using temporal optimization which optimizes the time stamps for all waypoints along the given path. The formulation of the problem under urban driving scenarios is discussed. To speed up the computation, the non-convex temporal optimization is approximated by a set of quadratic programs which are solved iteratively using the slack convex feasible set (SCFS) algorithm. The simulations in various urban driving scenarios validate the effectiveness of the method.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134472642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A machine learning approach for personalized autonomous lane change initiation and control","authors":"Charlott Vallon, Ziya Ercan, Ashwin Carvalho, F. Borrelli","doi":"10.1109/IVS.2017.7995936","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995936","url":null,"abstract":"We study an algorithm that allows a vehicle to autonomously change lanes in a safe but personalized fashion without the driver's explicit initiation (e.g. activating the turn signals). Lane change initiation in autonomous driving is typically based on subjective rules, functions of the positions and relative velocities of surrounding vehicles. This approach is often arbitrary, and not easily adapted to the driving style preferences of an individual driver. Here we propose a data-driven modeling approach to capture the lane change decision behavior of human drivers. We collect data with a test vehicle in typical lane change situations and train classifiers to predict the instant of lane change initiation with respect to the preferences of a particular driver. We integrate this decision logic into a model predictive control (MPC) framework to create a more personalized autonomous lane change experience that satisfies safety and comfort constraints. 
We show the ability of the decision logic to reproduce and differentiate between two lane changing styles, and demonstrate the safety and effectiveness of the control framework through simulations.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133186208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated driving by monocular camera using deep mixture of experts","authors":"V. John, S. Mita, Hossein Tehrani Niknejad, Kazuhisa Ishimaru","doi":"10.1109/IVS.2017.7995709","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995709","url":null,"abstract":"In this paper, we propose a real-time vision-based filtering algorithm for steering angle estimation in autonomous driving. A novel scene-based particle filtering algorithm is used to estimate and track the steering angle using images obtained from a monocular camera. Highly accurate proposal distributions and likelihoods are modeled for the second-order particle filter, at the scene level, using deep learning. For every road scene, an individual proposal distribution and likelihood model is learned for the corresponding particle filter. The proposal distribution is modeled using a novel long short-term memory network mixture-of-experts-based regression framework. To facilitate the learning of highly accurate proposal distributions, each road scene is partitioned into straight-driving, left-turning and right-turning sub-partitions. Each expert in the regression framework then accurately models the expert driver's behavior within a specific partition of the given road scene. Owing to the accuracy of the modeled proposal distributions, the steering angle is robustly tracked, even with a limited number of sampled particles. The sampled particles are assigned importance weights using a deep learning-based likelihood. The likelihood is modeled with a convolutional neural network and extra-trees-based regression framework, which predicts the steering angle for a given image. We validate our proposed algorithm using multiple sequences. We perform a detailed parameter analysis and a comparative analysis of our proposed algorithm with different baseline algorithms. 
Experimental results show that the proposed algorithm can robustly track the steering angles with few particles in real-time even for challenging scenes.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125120661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Refine pedestrian detections by referring to features in different ways","authors":"Jaemyung Lee, Sihaeng Lee, Youngdong Kim, Janghyeon Lee, Junmo Kim","doi":"10.1109/IVS.2017.7995754","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995754","url":null,"abstract":"The performance of object detection has improved with the success of deep architectures. The algorithm predominantly used for general detection is Faster R-CNN because of its high accuracy and fast inference time. In pedestrian detection, the Region Proposal Network (RPN), which generates region proposals in Faster R-CNN, can itself be used as a pedestrian detector. The RPN even shows better performance than Faster R-CNN for pedestrian detection. However, the RPN generates severe false positives, such as high-scoring backgrounds and double detections, because it does not have a downstream classifier. From these observations, we designed a network to refine the results generated by the RPN. Our Refinement Network refers to the feature maps of the RPN and is trained to rescore severe false positives. We also found that the choice of feature-referencing method is crucial for improving performance. Our network shows better accuracy than the RPN at almost the same speed on the Caltech Pedestrian Detection benchmark.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130405317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intervention minimized semi-autonomous control using decoupled model predictive control","authors":"Hayoung Kim, Jeongmin Cho, Dongchan Kim, K. Huh","doi":"10.1109/IVS.2017.7995787","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995787","url":null,"abstract":"This paper proposes a semi-autonomous control scheme that minimizes intervention by considering the driver's steering and braking intentions. The biggest challenges of this problem are how to fairly judge the driver's intentions, which appear differently in the lateral and longitudinal directions, and how to minimize controller intervention. A decoupled model predictive control (MPC) and optimal intervention decision methods are proposed considering driver incompatibility. Several MPCs are designed first, based on the fact that the driver can avoid obstacles either by braking or by moving to the left or right lane. The control input to avoid the collision is calculated for each MPC such that its intervention is minimized while reflecting the driver's intention. After driver incompatibility is formalized, the optimal input is selected to minimize the incompatibility among the paths that can avoid accidents. The proposed algorithm is validated in simulations where collisions are avoided while minimizing intervention.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129746055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scale optimization for full-image-CNN vehicle detection","authors":"Yang Gao, Shouyan Guo, Kai-Wei Huang, Jiaxin Chen, Q. Gong, Yang Zou, Tongyao Bai, G. Overett","doi":"10.1109/IVS.2017.7995812","DOIUrl":"https://doi.org/10.1109/IVS.2017.7995812","url":null,"abstract":"Many state-of-the-art general object detection methods make use of shared full-image convolutional features (as in Faster R-CNN). This achieves a reasonable test-phase computation time while enjoying the discriminative power provided by large Convolutional Neural Network (CNN) models. Such designs excel on benchmarks which contain natural images but which have very unnatural distributions, i.e., they have an unnaturally high frequency of the target classes and a bias towards a “friendly” or “dominant” object scale. In this paper we present a further study of the use and adaptation of the Faster R-CNN object detection method for datasets presenting a natural scale distribution and unbiased real-world object frequency. In particular, we show that better alignment of the detector's scale sensitivity to the extant distribution improves vehicle detection performance. We do this by modifying both the selection of region proposals and the use of more scale-appropriate full-image convolutional features within the CNN model. By selecting better scales in the region proposal input and by combining feature maps through careful design of the convolutional neural network, we improve performance on smaller objects. 
We significantly increase detection AP for the KITTI dataset car class from 76.3% on our baseline Faster R-CNN detector to 83.6% in our improved detector.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122660204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}