{"title":"Lateral flow control of connected vehicles through deep reinforcement learning","authors":"Abdul Rahman Kreidieh, Y. Farid, K. Oguchi","doi":"10.1109/IV55152.2023.10186790","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186790","url":null,"abstract":"Coordinated lane-assignment strategies offer promising solutions for improving traffic conditions. By anticipating and re-positioning connected vehicles in response to potential downstream events, such systems can greatly improve the safety and efficiency of existing networks. Assigning these decisions, however, grows exponentially more complex as the scale of target networks expands. In this paper, we explore solutions to optimal lane assignment at the macroscopic level of traffic, whereby decisions are aggregated across multiple vehicles clustered spatially into sections. This approach reduces some of the challenges around scalability, but introduces dynamical interactions at the microscopic level that complicate higher-level decision-making. To this end, we provide results demonstrating that reinforcement learning (RL) strategies are capable of generating responses that efficiently coordinate the lateral flow of vehicles across multiple road sections. 
In particular, we find that RL methods can robustly identify and maneuver vehicles around bottlenecks placed randomly within a given network, and in doing so substantially reduce the travel time for both human-driven and connected vehicles.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114353976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Centralised Vehicle Routing for Optimising Urban Traffic: A Scalability Perspective","authors":"L. Chrpa, M. Vallati","doi":"10.1109/IV55152.2023.10186707","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186707","url":null,"abstract":"In light of revolutionary technologies such as connected autonomous vehicles, centralised vehicle (or traffic) routing is attracting growing interest as an effective method to tackle traffic congestion in urban areas, which causes enormous economic losses. While the potential benefits of centralised vehicle routing techniques are huge, these techniques are not yet mature enough to be deployed in (large) urban areas. The major issue preventing their deployment is the lack of scalability. This position paper provides an all-encompassing discussion of how the scalability issue for centralised vehicle (traffic) routing approaches might be addressed. In particular, we elaborate on how the model of the environment (the road network and the traffic) can be reasonably abstracted to allow simplified yet meaningful reasoning. Then, we provide an overview of relevant classes of decision-making techniques and elaborate on how they can be applied to tackle the problem. 
Finally, we present our perspective on how different types of decision-making techniques can be effectively combined to deal with the scalability issue while maintaining reasonable quality of the assigned routes.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114601480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Targetless Extrinsic Calibration Between Event-Based and RGB Camera for Intelligent Transportation Systems","authors":"Christian Creß, Erik Schütz, B. L. Žagar, Alois Knoll","doi":"10.1109/IV55152.2023.10186538","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186538","url":null,"abstract":"The perception of Intelligent Transportation Systems is mainly based on conventional cameras. Event-based cameras have a high potential to increase detection performance in such sensor systems. Therefore, an extrinsic calibration between these sensors is required. Since a target-based method with a checkerboard on the highway is impractical, a targetless approach is necessary. To the best of our knowledge, no working approach for targetless extrinsic calibration between event-based and conventional cameras in the domain of ITS exists. To fill this knowledge gap, we provide a targetless approach for extrinsic calibration. Our algorithm finds correspondences of the detected motion between both sensors using deep learning-based instance segmentation and sparse optical flow. Then, it calculates the transformation. We verified the effectiveness of our method in experiments. Furthermore, our results are comparable to those of existing multi-camera calibration methods. Our approach can thus be used for targetless extrinsic calibration between event-based and conventional cameras.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117040445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Validation of an Open Architecture for Autonomous Vehicle Control","authors":"Alfredo Valle Barrio, Walter Morales-Alvarez, C. Olaverri-Monreal, José Eugenio Naranjo Hernandez","doi":"10.1109/IV55152.2023.10186551","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186551","url":null,"abstract":"Teams dedicated to research in the field of autonomous vehicles can be found in many universities and research centers, but they often face the challenge of finding a suitable platform for their work. The primary reason for this challenge is the inaccessibility and high cost of commercial autonomous vehicles, leading researchers to rely on simulators. This paper introduces a new software architecture designed to automate vehicles, providing all the necessary capabilities of an autonomous vehicle in a more cost-effective and efficient manner. The architecture is designed to be modular and universal, with a public interface, making it easy to modify, adaptable to any type of vehicle, and accessible to any researcher. The new software architecture has been implemented on two platforms: a vehicle integrated with OpenPilot via ROS2 without any external hardware, and a last-mile robot. A validation test was conducted with volunteers to assess the reaction of passengers while the car was driving autonomously. 
The results of this implementation demonstrate the potential of this new software architecture to provide a comprehensive and accessible platform for the advancement of autonomous vehicle research.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122128184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning and Adapting Behavior of Autonomous Vehicles through Inverse Reinforcement Learning","authors":"Rainer Trauth, Marc Kaufeld, Maximilian Geisslinger, Johannes Betz","doi":"10.1109/IV55152.2023.10186668","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186668","url":null,"abstract":"The driving behavior of autonomous vehicles has a significant impact on safety for all traffic participants. Unlike current traffic participants, autonomous vehicles in the future will also need to adhere to safety standards and defined risk properties in order to achieve a high level of public acceptance. At the same time, successful autonomous vehicles must be able to interact with human drivers in mixed traffic in a way that enables traffic to flow. In this paper, we present a hybrid approach to trajectory planning that learns and adapts human driving behavior using inverse reinforcement learning. The proposed approach performs a large-scale simulation with HighD real-world scenarios to learn human driving behavior and domain-specific traffic-flow characteristics. The analysis focuses on the influence of risk-taking, which provides insight into driving-style safety. The results show insights into the risk behavior of trajectory planning approaches compared to human risk assessment. The comparison to human trajectories is intended to ensure comparability and accurate classification of risk-taking. 
We recommend a hybrid method for adapting driving behavior, in order to maintain the explainability and safety of the trajectory planning algorithm.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125799736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Establishing Systematic Classification Requirements for Automated Driving","authors":"Kent Mori, Trent Brown, Steven C. Peters","doi":"10.1109/IV55152.2023.10186542","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186542","url":null,"abstract":"Despite the presence of the classification task in many different benchmark datasets for perception in the automotive domain, few efforts have been undertaken to define consistent classification requirements. This work addresses the topic by proposing a structured method to generate a classification structure. First, legal categories are identified based on behavioral requirements for the vehicle. This structure is further substantiated by considering the two aspects of collision safety for objects as well as perceptual categories. A classification hierarchy is obtained by applying the method to an exemplary legal text. A comparison of the results with benchmark dataset categories shows limited agreement. This indicates the necessity for explicit consideration of legal requirements regarding perception.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125814946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewpoint Invariant 3D Driver Body Pose-Based Activity Recognition","authors":"Manuel Martin, D. Lerch, M. Voit","doi":"10.1109/IV55152.2023.10186682","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186682","url":null,"abstract":"Driver monitoring will be required in many countries for all new vehicles with automation functions. While the common approach for this task is face and eye gaze monitoring, cameras with a wider field of view have already been introduced in some cars. However, their mounting position can change between vehicle models. To minimize data collection efforts and to facilitate data reuse, it is important for algorithms to be able to deal with a changing environment. We conduct an experiment comparing the performance of video-based models with 3D body pose-based activity recognition methods with regard to sensor and position changes. We introduce a modular activity recognition pipeline which uses a sensor-independent representation including the 3D body pose of the driver, 3D dynamic object positions, and 3D interior positions. We show that while video-based models offer the best overall quality when trained and tested on the same camera view, body pose-based methods can be far more robust to positional changes. 
Moreover, augmentation reduces the performance drop across views to 35% compared to 83% without augmentation and 90% for video-based models.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129770078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Open-world driving scene segmentation via multi-stage and multi-modality fusion of vision-language embedding","authors":"Yingjie Niu, Ming Ding, Yuxiao Zhang, Maoning Ge, Hanting Yang, K. Takeda","doi":"10.1109/IV55152.2023.10186652","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186652","url":null,"abstract":"In this study, a pixel-text level multi-stage multi-modality fusion segmentation method is proposed to make open-world driving scene segmentation more efficient. It can address the varied semantic perception needs of autonomous driving in real-world situations. The method can finely segment unseen labels using only existing semantic segmentation data, without additional corresponding segmentation labels. The proposed method consists of four modules. A visual representation embedding module and a segmentation command embedding module extract the driving scene and the segmentation category command. A multi-stage multi-modality fusion module fuses the driving scene visual information and segmentation command text information at different sizes at the pixel-text level. Finally, a cascade segmentation head grounds the segmentation command text to the driving scene, encouraging the model to generate corresponding high-quality semantic segmentation results. In the experiment, we first verify the effectiveness of the method for zero-shot segmentation using a popular driving scene segmentation dataset. 
We also confirm its effectiveness on synonym and hierarchy unseen labels for open-world semantic segmentation.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"82 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128154160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Cross-Domain Semi-Supervised Object Detection with Adversarial Domain Adaptation","authors":"Maximilian Menke, Thomas Wenzel, Andreas Schwung","doi":"10.1109/IV55152.2023.10186678","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186678","url":null,"abstract":"In autonomous driving, millions of frames covering various scenarios are required to train deep object detectors. Labeling such a large number of frames is a costly process; therefore, additional data sources are used to support the training task. However, domain gaps from different cameras, weather, or locations typically limit performance. We apply semi-supervised object detection, which leverages labeled source and pseudo-labeled target domain data in an iterative training paradigm. In addition, we integrate state-of-the-art adversarial style transfer into the semi-supervised training by stylizing images from the source and target domains. This reduces the domain gap and improves pseudo-label quality in cross-domain semi-supervised training. In experiments and ablation studies, we show that our novel training framework can improve state-of-the-art detection performance by up to +10.1% on standard domain adaptation benchmarks.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128935816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metric Learning Based Class Specific Experts for Open-Set Recognition of Traffic Participants in Urban Areas Using Infrastructure Sensors","authors":"Karthikeyan Chandra Sekaran, Lakshman Balasubramanian, M. Botsch, W. Utschick","doi":"10.1109/IV55152.2023.10186527","DOIUrl":"https://doi.org/10.1109/IV55152.2023.10186527","url":null,"abstract":"Sensors installed in the infrastructure can make a significant contribution to the advancement of Advanced Driver Assistance Systems (ADAS) and connected mobility. Thermal cameras provide protection against the abuse of personalised data and perform robustly in challenging environmental conditions, making them an excellent choice for infrastructural perception. The goal of this work is to solve the crucial problem of Open-Set Recognition (OSR) for thermal camera-based perception systems installed in the infrastructure. In this paper, a novel modular architecture for OSR called Class Specific Experts (CSE) is proposed, in which class specialization is achieved using individual feature spaces. The proposed methodology can be easily embedded in an object detection setting and provides, as its main advantage, the possibility of online incremental learning without catastrophic forgetting. This work also introduces an open-source classification dataset called Infrastructure Thermal Dataset (ITD) containing image snippets captured by a thermal camera mounted in the infrastructure. 
The proposed approach outperforms the compared baselines for the task of OSR on many publicly available thermal and non-thermal datasets, as well as the new ITD dataset.","PeriodicalId":195148,"journal":{"name":"2023 IEEE Intelligent Vehicles Symposium (IV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129050672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}