{"title":"How Can Automated Vehicles Explain Their Driving Decisions? Generating Clarifying Summaries Automatically","authors":"Franziska Henze, Dennis Fassbender, C. Stiller","doi":"10.1109/iv51971.2022.9827197","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827197","url":null,"abstract":"One way to increase user acceptance of automated vehicles is to explain their driving decisions, but current methods still involve human interpretation and are thus prone to errors. Therefore, the presented method formulates summaries that clarify the automated vehicle’s driving decision by extracting all necessary information automatically from the planning algorithm. This paper shows the generation of three exemplary statement types and their validation with an online survey that investigated users’ preferences. The results suggest that participants favor statements describing information that affects the driving decision as well as applicable traffic rules. Additionally, individual information needs should be considered when constructing modular explanations. Although this analysis considers neither sophisticated human-machine interfaces nor real traffic scenarios, it does show, for the first time, how satisfying statements can be generated using a planning algorithm without any human-induced bias. This is an important step towards self-contained transparency of automated driving functions and can therefore lay the basis for future human-machine interfaces.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133015255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lidar and Landmark based Localization System for a Wheeled Mobile Driving Simulator","authors":"Melina Lutwitzi, D. Betschinske, T. Albrecht, H. Winner","doi":"10.1109/iv51971.2022.9827085","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827085","url":null,"abstract":"The following work presents the development of a vehicle positioning function using vehicle-mounted Ouster OS1-32 lidar sensors and retroreflective landmarks. The function is developed for the use case of a wheeled mobile driving simulator, which is a mobile robot performing driving maneuvers within a virtually limited circular workspace. Nevertheless, the function is transferable to other applications where a vehicle’s dynamic position on a limitable area is to be determined with high dependability and independently of random environmental features. Based on the specific requirements of simulator operation, a suitable landmark architecture, consisting of retroreflective cylinders, is derived. Then, the software architecture is presented, which mainly relies on a map-matching algorithm. The performance and robustness of the function are evaluated on a real prototype, in comparison with a DGPS reference system and under artificial perturbation of the lidar-landmark interaction. The results show the high potential of the developed function for safety-relevant positioning of the vehicle.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133395045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
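The map-matching step described in the abstract above can be illustrated with a toy sketch (not the authors' implementation): associate detected retroreflective landmarks with known map landmarks by nearest neighbor and estimate the vehicle's translation as the mean offset. All coordinates are invented, and rotation and outliers are ignored.

```python
import numpy as np

def estimate_translation(detected, landmark_map):
    """Associate each detected landmark with its nearest map landmark,
    then estimate the sensor's 2-D translation as the mean offset.
    (Toy sketch: assumes negligible rotation and no outliers.)"""
    offsets = []
    for d in detected:
        nearest = min(landmark_map,
                      key=lambda m: np.hypot(m[0] - d[0], m[1] - d[1]))
        offsets.append((nearest[0] - d[0], nearest[1] - d[1]))
    return tuple(np.mean(offsets, axis=0))

landmark_map = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known cylinder positions
detected = [(-1.0, -0.5), (9.0, -0.5), (-1.0, 9.5)]    # positions seen from the vehicle
dx, dy = estimate_translation(detected, landmark_map)  # vehicle offset estimate
```

A real system would additionally solve for heading (e.g. with a least-squares rigid transform) and reject false reflections before averaging.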
{"title":"Revisiting PatchMatch Multi-View Stereo for Urban 3D Reconstruction","authors":"M. Orsingher, P. Zani, P. Medici, M. Bertozzi","doi":"10.48550/arXiv.2207.08439","DOIUrl":"https://doi.org/10.48550/arXiv.2207.08439","url":null,"abstract":"In this paper, a complete pipeline for image-based 3D reconstruction of urban scenarios is proposed, based on PatchMatch Multi-View Stereo (MVS). Input images are first fed into an off-the-shelf visual SLAM system to extract camera poses and sparse keypoints, which are used to initialize PatchMatch optimization. Then, pixelwise depths and normals are iteratively computed in a multi-scale framework with a novel depth-normal consistency loss term and a global refinement algorithm to balance the inherently local nature of PatchMatch. Finally, a large-scale point cloud is generated by back-projecting multi-view consistent estimates in 3D. The proposed approach is carefully evaluated against both classical MVS algorithms and monocular depth networks on the KITTI dataset, showing state-of-the-art performance.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121872356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
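The final step of the pipeline above, back-projecting per-pixel depths into a 3-D point cloud, can be sketched with a standard pinhole camera model. This is an illustrative sketch, not the paper's code; the intrinsics (fx, fy, cx, cy) and depth values are made-up examples.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into camera-frame 3-D
    points using the pinhole model; returns an (H*W, 3) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)                 # toy case: flat wall 2 m away
pts = backproject(depth, fx=100, fy=100, cx=2.0, cy=2.0)
```

In the full pipeline, each camera's points would then be transformed into the world frame with the SLAM poses and filtered by the multi-view consistency check before merging.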
{"title":"Vehicle-to-Everything (V2X) in Scenarios: Extending Scenario Description Language for Connected Vehicle Scenario Descriptions*","authors":"Patrick Irvine, Peter Baker, Y. K. Mo, A. B. D. Costa, Xizhe Zhang, S. Khastgir, P. Jennings","doi":"10.1109/iv51971.2022.9827272","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827272","url":null,"abstract":"The move towards connected and autonomous vehicles (CAVs) has gained a strong focus in recent years due to the many benefits they provide. While the autonomous aspect has seen substantial advancement in both development and testing methodologies, the connected aspect has lagged behind, especially in verification and validation (V&V) discussions. Integrating connectivity into the development and testing framework for CAVs is a necessity for ensuring the early deployment of cooperative driving systems. A key element within such a framework is a test scenario, which represents the set of scenery, environmental conditions, and dynamic conditions that a system needs to be tested in. However, the connectivity element is not present in any of the current state-of-the-art scenario description languages (SDLs) that are publicly available. This leaves a gap within the CAV development ecosystem. To accommodate, and accelerate the development of, connected vehicle systems and their verification and validation methods, this paper proposes a novel V2X extension to the previously published two-level abstraction SDL. The extension enables communications between vehicles, infrastructure, and further additional entities to be specified as part of the scenario and subsequently tested in virtual or real-world testing. Eight new V2X attributes have been added to the SDL. An example set of syntax and semantic definitions is presented in this paper, targeting two abstraction levels: level 1 aims at the abstract scenario level for non-technical end-users such as regulators, and level 2 aims at the logical and concrete scenario level for end-users such as simulation test engineers.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121760839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliable Evaluation of Navigation States Estimation for Automated Driving Systems","authors":"S. Srinara, S. Tsai, Cheng-Xian Lin, M. Tsai, K. Chiang","doi":"10.1109/iv51971.2022.9827391","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827391","url":null,"abstract":"To achieve a higher level of automation in modern automated driving systems (ADS), reliable evaluation of navigation state estimation is a crucial demand. Although several evaluation approaches have been presented, no study has examined the problems involved in establishing a trustworthy reference system for fully evaluating ADS performance. This paper proposes new strategies for better handling the ground-truth system for full navigation evaluation in automated driving applications. The first strategy uses the integrated solution of an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) as the initial pose for the normal distributions transform (NDT) with a high-definition (HD) point cloud map, from which an accurate LiDAR-based navigation estimate can be achieved. In the second strategy, the LiDAR-based position is used as a measurement to update the loosely coupled (LC) INS/GNSS/LiDAR integration system. The preliminary results indicate that the proposed LC-INS/GNSS/LiDAR strategy not only estimates full navigation solutions but also appears to provide more accurate and reliable evaluation of positioning, navigation and timing (PNT) services compared to conventional methods.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124771254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
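The loosely coupled update described above, using a LiDAR/NDT position fix as a measurement to correct the INS/GNSS prediction, reduces, in its simplest form, to a Kalman measurement update. A minimal one-dimensional sketch (all numbers assumed, not from the paper):

```python
def kf_update(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update: fuse the predicted state x_pred
    (variance p_pred) with a measurement z (variance r)."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # corrected estimate
    p = (1.0 - k) * p_pred             # reduced uncertainty
    return x, p

# INS/GNSS predicts 100.0 m along-track (variance 4.0 m^2);
# the LiDAR/NDT fix reports 100.8 m (variance 1.0 m^2)
x, p = kf_update(100.0, 4.0, 100.8, 1.0)
```

The actual integration filter is multidimensional (position, velocity, attitude, sensor biases), but the gain-weighted blending of prediction and LiDAR measurement is the same idea.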
{"title":"A convolution-based grid map reconfiguration method for autonomous driving in highly constrained environments","authors":"Chaojie Zhang, Mengxuan Song, Jun Wang","doi":"10.1109/iv51971.2022.9827163","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827163","url":null,"abstract":"This paper proposes a convolution-based method for reconfiguring grid maps of highly constrained environments that considers the contour and heading of an autonomous vehicle. The vehicle contour at each possible heading angle is taken as a convolution kernel. Multiple convolutions between the kernels and the environment map are performed to generate a three-dimensional grid map, which significantly improves the computational efficiency of the collision detection algorithm. Moreover, a hierarchical, multistage trajectory planning method based on the reconfigured grid map is proposed. The superiority of the proposed method is verified by comparative simulations and real-time experiments.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130067801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
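The core idea above, convolving a vehicle-footprint kernel with the occupancy grid once per heading to precompute collision checks, can be sketched as follows. This is an illustrative pure-NumPy sketch under assumed grid and footprint sizes, not the authors' implementation.

```python
import numpy as np

def collision_map(occupancy, kernel):
    """Correlate a binary occupancy grid with a binary vehicle-footprint
    kernel: a nonzero output cell means a vehicle footprint centered
    there overlaps an obstacle. (With an all-ones kernel, correlation
    and convolution coincide.)"""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(occupancy, ((ph, ph), (pw, pw)))
    h, w = occupancy.shape
    out = np.zeros_like(occupancy)
    for i in range(kh):          # accumulate shifted copies of the grid
        for j in range(kw):
            if kernel[i, j]:
                out += padded[i:i + h, j:j + w]
    return out

grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1                           # one obstacle cell
fp = np.ones((3, 5), dtype=int)            # footprint: 3 cells wide, 5 long
# one collision layer per heading (here 0° and 90°, via the transpose),
# stacked into the three-dimensional grid map the abstract describes
layers = np.stack([collision_map(grid, fp), collision_map(grid, fp.T)])
```

A planner can then test any (cell, heading) configuration with a single array lookup instead of a polygon-overlap check.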
{"title":"Adaptive Safe Control for Driving in Uncertain Environments","authors":"Siddharth Gangadhar, Zhuoyuan Wang, Haoming Jing, Yorie Nakahira","doi":"10.1109/iv51971.2022.9827264","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827264","url":null,"abstract":"This paper presents an adaptive safe control method that can adapt to changing environments, tolerate large uncertainties, and exploit predictions in autonomous driving. We first derive a sufficient condition to ensure long-term safe probability when there are uncertainties in system parameters. Then, we use the safety condition to formulate a stochastic adaptive safe control method. Finally, we test the proposed technique numerically in a few driving scenarios. The use of long-term safe probability provides a sufficient outlook time horizon to capture future predictions of the environment and planned vehicle maneuvers and to avoid unsafe regions of attraction. The resulting control action systematically mediates behaviors based on uncertainties and can find safer actions even under large uncertainties. This feature allows the system to quickly respond to changes and risks, even before an accurate estimate of the changed parameters can be constructed. The safe probability can be continuously learned and refined. Using more precise probabilities avoids over-conservatism, a common drawback of deterministic worst-case approaches. The proposed techniques can also be efficiently computed in real time using onboard hardware and modularly integrated into existing processes such as model predictive controllers.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130252810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
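The notion of a "safe probability" under parameter uncertainty, as opposed to a deterministic worst case, can be illustrated with a toy Monte Carlo estimate. This is not the paper's method (which derives analytic safety conditions); it is only a sketch of the underlying quantity, with an invented braking scenario and an assumed friction distribution.

```python
import random

def braking_safe_probability(gap_m, speed_mps, mu_samples, g=9.81):
    """Monte Carlo estimate of the probability that the braking
    distance v^2 / (2*mu*g) stays below the current gap, given samples
    of the uncertain road-friction coefficient mu."""
    safe = sum(speed_mps ** 2 / (2.0 * mu * g) < gap_m for mu in mu_samples)
    return safe / len(mu_samples)

random.seed(0)
mu_samples = [random.uniform(0.3, 0.9) for _ in range(10_000)]
p_safe = braking_safe_probability(gap_m=40.0, speed_mps=20.0,
                                  mu_samples=mu_samples)
```

A worst-case controller would plan for mu = 0.3 and always brake hard; a probabilistic controller can instead require, say, p_safe above a threshold, becoming less conservative as the friction estimate is refined.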
{"title":"A Contrastive-Learning-Based Method for Alert-Scene Categorization","authors":"Shaochi Hu, Hanwei Fan, Biao Gao, Huijing Zhao","doi":"10.1109/iv51971.2022.9827387","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827387","url":null,"abstract":"Whether it drives a driver warning or an autonomous driving system, an ADAS needs to decide when to alert the driver of danger or take over control. This research formulates the problem as alert-scene categorization and proposes a method using contrastive learning. Given a front-view video of a driving scene, a set of anchor points is marked by a human driver, where an anchor point indicates that the semantic attribute of the current scene differs from that of the previous one. The anchor frames are then used to generate contrastive image pairs to train a feature encoder and obtain a scene similarity measure, so as to expand the distance between scenes of different categories in the feature space. Each scene category is explicitly modeled to capture the meta pattern in the distribution of scene similarity values, which is then used to infer scene categories. Experiments are conducted on front-view videos collected while driving on a cluttered, dynamic campus. The scenes are categorized into no alert, longitudinal alert, and lateral alert. The results are studied at the feature encoding, category modeling, and reasoning levels. Comparing precision with two fully supervised end-to-end baseline models, the proposed method demonstrates competitive or superior performance. However, open questions remain, namely how to generate ground-truth data and how to evaluate performance in ambiguous situations; these lead to future work.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129601850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foresee the Unseen: Sequential Reasoning about Hidden Obstacles for Safe Driving","authors":"José Manuel Gaspar Sánchez, Truls Nyberg, Christian Pek, Jana Tumova, Martin Törngren","doi":"10.1109/iv51971.2022.9827171","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827171","url":null,"abstract":"Safe driving requires autonomous vehicles to anticipate potential hidden traffic participants and other unseen objects, such as a cyclist hidden behind a large vehicle, or an object on the road hidden behind a building. Existing methods are usually unable to consider all possible shapes and orientations of such obstacles. They also typically do not reason about observations of hidden obstacles over time, leading to conservative anticipations. We overcome these limitations by (1) modeling possible hidden obstacles as a set of states of a point mass model and (2) sequential reasoning based on reachability analysis and previous observations. Based on (1), our method is safer, since we anticipate obstacles of arbitrary unknown shapes and orientations. In addition, (2) increases the available drivable space when planning trajectories for autonomous vehicles. In our experiments, we demonstrate that our method, at no expense of safety, gives rise to significant reductions in time to traverse various intersection scenarios from the CommonRoad Benchmark Suite.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129707019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
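The sequential reasoning in the abstract above, growing a hidden obstacle's reachable set with a point-mass motion model and shrinking it with each new observation, can be sketched in one dimension. This is a toy sketch with assumed numbers, not the paper's reachability implementation.

```python
def propagate(reachable, v_max, dt):
    """Grow the 1-D reachable interval of a hidden point-mass obstacle
    by its maximum travel over one time step."""
    lo, hi = reachable
    return (lo - v_max * dt, hi + v_max * dt)

def restrict(reachable, occluded):
    """Intersect the reachable set with the currently occluded region:
    a hidden obstacle can only be where the sensors still cannot see."""
    lo = max(reachable[0], occluded[0])
    hi = min(reachable[1], occluded[1])
    return (lo, hi) if lo <= hi else None   # None: region provably clear

# occlusion initially spans 5-10 m along the lane; obstacle speed <= 2 m/s
reachable = (5.0, 10.0)
reachable = propagate(reachable, v_max=2.0, dt=0.5)    # worst-case growth
reachable = restrict(reachable, occluded=(6.0, 10.0))  # new observation
```

A one-shot worst-case method would treat the whole occluded region as occupied every frame; carrying the interval across frames is what recovers drivable space over time.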
{"title":"Model-Based Reinforcement Learning for Advanced Adaptive Cruise Control: A Hybrid Car Following Policy","authors":"M. U. Yavas, T. Kumbasar, N. K. Ure","doi":"10.1109/iv51971.2022.9827279","DOIUrl":"https://doi.org/10.1109/iv51971.2022.9827279","url":null,"abstract":"Adaptive cruise control (ACC) is one of the frontier functionalities for highly automated vehicles and has been widely studied by both academia and industry. However, previous ACC approaches are reactive and rely on precise information about the current state of a single lead vehicle. With the advancement of artificial intelligence, particularly reinforcement learning, there is a big opportunity to enhance the current functionality. This paper presents an advanced ACC concept with a unique environment representation and a model-based reinforcement learning (MBRL) technique that enables predictive driving. By predictive, we refer to the capability to handle multiple lead vehicles and to maintain internal predictions about the traffic environment, which avoids reactive short-term policies. Moreover, we propose a hybrid policy that combines classical car-following policies with the MBRL policy to avoid accidents by monitoring the internal model of the MBRL policy. Our extensive evaluation in a realistic simulation environment shows that the proposed approach is superior to the reference model-based and model-free algorithms. The MBRL agent requires only 150k samples (approximately 50 hours of driving) to converge, which is four times more sample-efficient than model-free methods.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117176785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
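The hybrid-policy idea above, falling back to a classical car-following law when the learned model looks unreliable, can be sketched as follows. The paper does not specify its classical policy or gating rule; here the Intelligent Driver Model (IDM), a standard car-following model, stands in as the fallback, and the error threshold is a hypothetical choice.

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: commanded acceleration from own speed v,
    lead speed v_lead, and bumper-to-bumper gap (SI units; parameters
    are typical textbook values)."""
    s_star = s0 + v * T + v * (v - v_lead) / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def hybrid_accel(mbrl_accel, model_error, v, v_lead, gap, error_threshold=0.5):
    """Hypothetical gating rule: trust the MBRL action unless the learned
    model's self-reported prediction error exceeds the threshold."""
    if model_error > error_threshold:
        return idm_accel(v, v_lead, gap)   # classical fallback
    return mbrl_accel

# high model error -> the classical IDM action overrides the MBRL action
a = hybrid_accel(mbrl_accel=0.8, model_error=0.9, v=20.0, v_lead=20.0, gap=40.0)
```

Monitoring the internal model's error this way lets the learned policy drive in nominal traffic while a well-understood controller handles situations the model has not learned.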