{"title":"AOP-Net: All-in-One Perception Network for LiDAR-based Joint 3D Object Detection and Panoptic Segmentation","authors":"Yixuan Xu, H. Fazlali, Y. Ren, Bingbing Liu","doi":"10.1109/IV55152.2023.10186743","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186743","url":null,"abstract":"LiDAR-based 3D object detection and panoptic segmentation are two crucial tasks in the perception systems of autonomous vehicles and robots. In this paper, we propose All-in-One Perception Network (AOP-Net), a LiDAR-based multitask framework that combines 3D object detection and panoptic segmentation. In this method, a dual-task 3D backbone is developed to extract both panoptic- and detection-level features from the input LiDAR point cloud. Also, a new 2D backbone that intertwines Multi-Layer Perceptron (MLP) and convolution layers is designed to further improve the detection task performance. Finally, a novel module is proposed to guide the detection head by recovering useful features discarded during down-sampling operations in the 3D backbone. This module leverages estimated instance segmentation masks to recover detailed information from each candidate object. The AOP-Net achieves state-of-the-art performance for published works on the nuScenes benchmark for both 3D object detection and panoptic segmentation tasks. Also, experiments show that our method easily adapts to and significantly improves the performance of any BEV-based 3D object detection method.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123528723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiDAR-based Cooperative Relative Localization","authors":"Jiqian Dong, Qi Chen, Deyuan Qu, Hongsheng Lu, Akila Ganlath, Qing Yang, Sikai Chen, S. Labi","doi":"10.1109/IV55152.2023.10186549","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186549","url":null,"abstract":"Vehicular cooperative perception aims to provide connected and automated vehicles (CAVs) with a longer and wider sensing range, making perception less susceptible to occlusions. However, this prospect is dimmed by the imperfection of onboard localization sensors such as Global Navigation Satellite Systems (GNSS), which can cause errors in aligning over-the-air perception data (from a remote vehicle) with a Host vehicle’s (HV’s) local observation. To mitigate this challenge, we propose a novel LiDAR-based relative localization framework based on the iterative closest point (ICP) algorithm. The framework seeks to estimate the correct transformation matrix between a pair of CAVs’ coordinate systems, through exchanging and matching a limited yet carefully chosen set of point clouds and usage of a coarse 2D map. From the deployment perspective, this means our framework only consumes conservative bandwidth in data transmission and can run efficiently with limited resources. Extensive evaluations on both synthetic dataset (COMAP) and KITTI-360 show that our proposed framework achieves state-of-the-art (SOTA) performance in cooperative localization. Therefore, it can be integrated with any upper-stream data fusion algorithm and serves as a preprocessor for high-quality cooperative perception.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123882918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-to-End Spatio-Temporal Attention-Based Lane-Change Intention Prediction from Multi-Perspective Cameras","authors":"Zhouqiao Zhao, Zhensong Wei, Danyang Tian, B. Reimer, Pnina Gershon, Ehsan Moradi-Pari","doi":"10.1109/IV55152.2023.10186602","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186602","url":null,"abstract":"Advanced Driver Assistance Systems (ADAS) with proactive alerts have been used to increase driving safety. Such systems’ performance greatly depends on how accurately and quickly the risky situations and maneuvers are detected. Existing ADAS provide warnings based on the vehicle’s operational status, detection of environments, and the drivers’ overt actions (e.g., using turn signals or steering wheels), which may not give drivers as much as optimal time to react. In this paper, we proposed a spatio-temporal attention-based neural network to predict drivers’ lane-change intention by fusing the videos from both in-cabin and forward perspectives. The Convolutional Neural Network (CNN)-Recursive Neural Network (RNN) network architecture was leveraged to extract both the spatial and temporal information. On top of this network backbone structure, the feature maps from different time steps and perspectives were fused using multi-head self-attention at each resolution of the CNN. The proposed model was trained and evaluated using a processed subset of the MIT Advanced Vehicle Technology (MIT-AVT) dataset which contains synchronized CAN data, 11058-second videos from 3 different views, 548 lane-change events, and 274 non-lane-change events performed by 83 drivers. The results demonstrate that the model achieves 87% F1-score within the 1-second validation window and 70% F1-score within the 5-second validation window with real-time performance.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative Reinforcement Learning-based Damping of Lane-Change-Induced Waves","authors":"Kathy Jang, Y. Farid, K. Oguchi","doi":"10.1109/IV55152.2023.10186805","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186805","url":null,"abstract":"In this article, we demonstrate the first successful application of using reinforcement learning (RL) to develop policies for connected, automated vehicles (CAVs) to mitigate the effects of lane changing in traffic. We discuss how lane changing is a source of wave propagation and disturbance in certain kinds of traffic and propose a RL-based solution for wave damping. While receiving information from the environment and the ego vehicle (connected, non-automated) which is performing a lane change, we train an RL agent, operating as a CAV, to mitigate the waves caused by the lane change. The CAV has an advantage in being able to plan given the information of the vehicle executing the lane change, providing the CAV with anticipatory foresight as well as practical downstream information. At evaluation, the RL-based policy achieves up to a 5.3% improvement in velocity and a 15.9% improvement in throughput. It completely mitigates the formation of waves for certain inflow rates, and facilitates significant improvements for other inflow rates.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127389408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative Merging Speed Planning: A Vehicle-Dynamics-Free Method","authors":"Zejiang Wang, Adian Cook, Y. Shao, Guanhao Xu, J. Chen","doi":"10.1109/IV55152.2023.10186541","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186541","url":null,"abstract":"Various cooperative merging control strategies at on-ramp have been proposed in the last decade. Approximated vehicle longitudinal motion models, e.g., kinematics model, have been broadly adopted for controller synthesis because of their simplicity. However, what appears problematic is that the models used for controller validation remain, in many cases, the same as the ones used for controller design. Indeed, actual vehicle dynamics contain rich behaviors that the simplified models cannot fully cover. In this paper, we first demonstrate that the actual vehicle speed can be dissimilar to the reference from a speed planner once vehicle dynamics is considered. Then, we propose two data-driven speed generators agnostic to vehicle dynamics. SUMO/Simulink joint simulations demonstrate that the proposed reference speed planners can successfully merge vehicles with distinct dynamics characteristics by following the desired sequence, speed, and intervehicle distance at the merging point while avoiding collisions.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129591394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating a Pressure Sensitive Surface Layer for Vehicle Localization","authors":"Simon Schäfer, Hendrik Steidl, S. Kowalewski, Bassam Alrifaee","doi":"10.1109/IV55152.2023.10186582","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186582","url":null,"abstract":"Roads are one of the most important transportation routes in the world, yet the way we build roads has remained the same for decades. The road system’s structure is virtually unchanged, and there is little use beyond the primary function of load transfer. However, the road could be an essential data source for different applications. This paper presents an algorithm for detecting and tracking vehicles passing over the road surface using real-time load data. We detect individual tires based on local pressure maxima on the surface and track them using a multiple-target tracker. Our algorithm subsequently identifies individual vehicles performing pattern matching with the tracked wheels. We tested the algorithm in the Cyber-Physical Mobility Lab because there is yet to be a system for real-world testing, and cyber-physical labs are more flexible and less expensive than real-world testing. In our test run, we achieved a vehicle detection accuracy and recall of 100% and a localization accuracy of a few centimeters.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"20 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120976470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consensus-Based Fault-Tolerant Platooning for Connected and Autonomous Vehicles","authors":"Tzu-Yen Tseng, Ding Huang, Jia-You Lin, Po-Jui Chang, Chung-Wei Lin, Changliu Liu","doi":"10.1109/IV55152.2023.10186667","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186667","url":null,"abstract":"Platooning is a representative application of connected and autonomous vehicles. The information exchanged between connected functions and the precise control of autonomous functions provide great safety and traffic capacity. In this paper, we develop an advanced consensus-based approach for platooning. By applying consensus-based fault detection and adaptive gains to controllers, we can detect faulty position and speed information from vehicles and reinstate the normal behavior of the platooning. Experimental results demonstrate that the developed approach outperforms the state-of-the-art approaches and achieves small steady state errors and small settling times under scenarios with faults.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131947964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Changing the Perspective: Assessing the Controllability of Minimal Risk Maneuvers for Surrounding Traffic Participants","authors":"B. Karakaya, K. Bengler","doi":"10.1109/IV55152.2023.10186745","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186745","url":null,"abstract":"A mixed traffic consisting of manual and automated vehicles will be part of reality in the near future. Vehicles of higher automation levels, which will have the ability to perform minimal risk maneuvers, will also participate in this traffic. Previous research in this area has focused on the interaction within the automated vehicle and a change of perspective was therefore carried out in this study. Three different maneuvers were evaluated from two perspectives of the surrounding traffic with regard to their controllability. A new method was used to evaluate the driving performance individually. The results of the driving simulator study with 35 participants show that an MRM in the form of a lane change coupled with a standstill on the shoulder lane is the most controllable. The results are consistent from both investigated perspectives.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125626428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Disengagement Scenario Reconstruction Based on Urban Test Drives of Automated Vehicles","authors":"Zhijing Zhu, Robin Philipp, Yongqi Zhao, Constanze Hungar, Jürgen Pannek, Falk Howar","doi":"10.1109/IV55152.2023.10186640","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186640","url":null,"abstract":"In recent years, scenario-based testing has gained increased attention as a potentially efficient strategy for validating the overall safety of automated vehicles. However, which scenarios are of interest for testing and how to systematically generate the test instances remain as unanswered questions. In this work, we interpret the importance of incorporating automated vehicle disengagement scenarios into scenario-based testing. Accordingly, we design and implement a fully automatic pipeline to reconstruct the essential and error-reduced disengagement scenarios based on imperfect perception measurement data from real test drives in an urban environment. Our concept is developed based on 137 disengagement data snippets and two additional datasets for handling false positives and false negatives in the original measurements. We use additional disengagement snippets for validating the performance of the pipeline. We exhibit representative reconstructed scenarios to show a successful restoration of the reality and quantitatively demonstrate the correct functioning of the methods in the pipeline regarding filtering irrelevant objects and handling the perception inaccuracies.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114478531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-Machine Interface Evaluation Using EEG in Driving Simulator","authors":"Yuan-Cheng Liu, Nikol Figalová, M. Baumann, K. Bengler","doi":"10.1109/IV55152.2023.10186567","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186567","url":null,"abstract":"Automated vehicles are pictured as the future of transportation, and facilitating safer driving is only one of the many benefits. However, due to the constantly changing role of the human driver, users are easily confused and have little knowledge about their responsibilities. Being the bridge between automation and human, the human-machine interface (HMI) is of great importance to driving safety. This study was conducted in a static driving simulator. Three HMI designs were developed, among which significant differences in mental workload using NASA-TLX and the subjective transparency test were found. An electroencephalogram was applied throughout the study to determine if differences in the mental workload could also be found using EEG’s spectral power analysis. Results suggested that more studies are required to determine the effectiveness of the spectral power of EEG on mental workload, but the three interface designs developed in this study could serve as a solid basis for future research to evaluate the effectiveness of psychophysiological measures.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115894901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}