{"title":"Research on optimization and evaluation method of the car following model based on SUMO application test scenario","authors":"Qianjing Sun, Yong Wang, Lingqiu Zeng, Qingwen Han, Qinglong Xie, L. Ye, Fukun Xie","doi":"10.1109/ivworkshops54471.2021.9669238","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669238","url":null,"abstract":"In terms of Vehicle to Everything (V2X) testing and evaluation, Hardware-in-the-loop (HIL) simulation has become an indispensable technology. In the research of HIL testing, it is necessary to use micro-traffic simulation software to build scenarios and simulate traffic objects to meet the testing requirements of complex traffic scenarios for the Internet of Vehicles (IOV). However, the performance of the micro-simulation model deadly influences simulation accuracy. Hence, in this paper, an improved micro-simulation model is constructed on basis of the Krauss model, and an application test scheme is designed. Simulation results show that the improved model solves the problem of acceleration changing abruptly, and improves the effectiveness and practicability of the V2X in-loop test.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132491742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Driving Behavior Aware Caption Generation for Egocentric Driving Videos Using In-Vehicle Sensors*","authors":"Hongkuan Zhang, Koichi Takeda, Ryohei Sasano, Yusuke Adachi, Kento Ohtani","doi":"10.1109/ivworkshops54471.2021.9669259","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669259","url":null,"abstract":"Video captioning aims to generate textual descriptions according to the video contents. The risk assessment of autonomous driving vehicles has become essential for an insurance company for providing adequate insurance coverage, in particular, for emerging MaaS business. The insurers need to assess the risk of autonomous driving business plans with a fixed route by analyzing a large number of driving data, including videos recorded by dash cameras and sensor signals. To make the process more efficient, generating captions for driving videos can provide insurers concise information to understand the video contents quickly. A natural problem with driving video captioning is, since the absence of egovehicles in these egocentric videos, descriptions of latent driving behaviors are difficult to be grounded in specific visual cues. To address this issue, we focus on generating driving video captions with accurate behavior descriptions, and propose to incorporate in-vehicle sensors which encapsulate the driving behavior information to assist the caption generation. We evaluate our method on the Japanese driving video captioning dataset called City Traffic, where the results demonstrate the effectiveness of in-vehicle sensors on improving the overall performance of generated captions, especially on generating more accurate descriptions for the driving behaviors.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EEG-based System Using Deep Learning and Attention Mechanism for Driver Drowsiness Detection","authors":"Miankuan Zhu, Haobo Li, Jiangfan Chen, Mitsuhiro Kamezaki, Zutao Zhang, Ze-xi Hua, S. Sugano","doi":"10.1109/ivworkshops54471.2021.9669234","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669234","url":null,"abstract":"The lack of sleep (typically <6 hours a night) or driving for a long time are the reasons of drowsiness driving and caused serious traffic accidents. With pandemic of the COVID-19, drivers are wearing masks to prevent infection from it, which makes visual-based drowsiness detection methods difficult. This paper presents an EEG-based driver drowsiness estimation method using deep learning and attention mechanism. First of all, an 8-channels EEG collection hat is used to acquire the EEG signals in the simulation scenario of drowsiness driving and normal driving. Then the EEG signals are pre-processed by using the linear filter and wavelet threshold denoising. Secondly, the neural network based on attention mechanism and deep residual network (ResNet) is trained to classify the EEG signals. Finally, an early warning module is designed to sound an alarm if the driver is judged as drowsy. The system was tested under simulated driving environment and the drowsiness detection accuracy of the test set was 93.35%. Drowsiness warning simulation also verified the effectiveness of proposed early warning module.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126387567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active Safety System for Semi-Autonomous Teleoperated Vehicles","authors":"Smit Saparia, Andreas Schimpe, L. Ferranti","doi":"10.1109/ivworkshops54471.2021.9669239","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669239","url":null,"abstract":"Autonomous cars can reduce road traffic accidents and provide a safer mode of transport. However, key technical challenges, such as safe navigation in complex urban environments, need to be addressed before deploying these vehicles on the market. Teleoperation can help smooth the transition from human operated to fully autonomous vehicles since it still has human in the loop providing the scope of fallback on driver. This paper presents an Active Safety System (ASS) approach for teleoperated driving. The proposed approach helps the operator ensure the safety of the vehicle in complex environments, that is, avoid collisions with static or dynamic obstacles. Our ASS relies on a model predictive control (MPC) formulation to control both the lateral and longitudinal dynamics of the vehicle. By exploiting the ability of the MPC framework to deal with constraints, our ASS restricts the controller’s authority to intervene for lateral correction of the human operator’s commands, avoiding counter-intuitive driving experience for the human operator. Further, we design a visual feedback to enhance the operator’s trust over the ASS. In addition, we propose an MPC’s prediction horizon data based novel predictive display to mitigate the effects of large latency in the teleoperation system. We tested the performance of the proposed approach on a high-fidelity vehicle simulator in the presence of dynamic obstacles and latency.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115077067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influences on Drivers’ Understandings of Systems by Presenting Image Recognition Results","authors":"Bo Yang, K. Inoue, S. Kitazaki, Kimihiko Nakano","doi":"10.1109/ivworkshops54471.2021.9669225","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669225","url":null,"abstract":"It is essential to help drivers have appropriate understandings of level 2 automated driving systems for keeping driving safety. A human machine interface (HMI) was proposed to present real time results of image recognition by the automated driving systems to drivers. It was expected that drivers could better understand the capabilities of the systems by observing the proposed HMI. Driving simulator experiments with 18 participants were preformed to evaluate the effectiveness of the proposed system. Experimental results indicated that the proposed HMI could effectively inform drivers of potential risks continuously and help drivers better understand the level 2 automated driving systems.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129969117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Oxford Road Boundaries Dataset","authors":"Tarlan Suleymanov, Matthew Gadd, D. Martini, P. Newman","doi":"10.1109/ivworkshops54471.2021.9669250","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669250","url":null,"abstract":"In this paper we present The Oxford Road Boundaries Dataset, designed for training and testing machine-learning-based road-boundary detection and inference approaches. We have hand-annotated two of the 10 km-long forays from the Oxford Robotcar Dataset and generated from other forays several thousand further examples with semi-annotated road-boundary masks. To boost the number of training samples in this way, we used a vision-based localiser to project labels from the annotated datasets to other traversals at different times and weather conditions. As a result, we release 62 605 labelled samples, of which 47 639 samples are curated. Each of these samples contain both raw and classified masks for left and right lenses. Our data contains images from a diverse set of scenarios such as straight roads, parked cars, junctions, etc. Files for download and tools for manipulating the labelled data are available at: oxford-robotics-institute.github.io/road-boundaries-dataset","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122002906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-World Evaluation of the Impact of Automated Driving System Technology on Driver Gaze Behavior, Reaction Time and Trust","authors":"Walter Morales-Alvarez, M. Marouf, H. Tadjine, C. Olaverri-Monreal","doi":"10.1109/ivworkshops54471.2021.9669230","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669230","url":null,"abstract":"Recent developments in advanced driving assistance systems (ADAS) that rely on some level of autonomy have led the automobile industry and research community to investigate the impact they might have on driving performance. However, most of the research performed so far is based on simulated environments. In this study we investigated the behavior of drivers in a vehicle with automated driving system (ADS) capabilities in a real life driving scenario. We analyzed their response to a take over request (TOR) at two different driving speeds while being engaged in non-driving-related tasks (NDRT). Results from the performed experiments showed that driver reaction time to a TOR, gaze behavior and self-reported trust in automation were affected by the type of NDRT being concurrently performed and driver reaction time and gaze behavior additionally depended on the driving or vehicle speed at the time of TOR.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130775437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis","authors":"Julia Rosenzweig, E. Brito, H. Kobialka, M. Akila, Nico M. Schmidt, Peter Schlicht, Jan David Schneider, Fabian Hüger, M. Rottmann, Sebastian Houben, Tim Wirtz","doi":"10.1109/ivworkshops54471.2021.9669248","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669248","url":null,"abstract":"Many machine learning applications can benefit from simulated data for systematic validation - in particular if real-life data is difficult to obtain or annotate. However, since simulations are prone to domain shift w.r.t. real-life data, it is crucial to verify the transferability of the obtained results.We propose a novel framework consisting of a generative label-to-image synthesis model together with different transferability measures to inspect to what extent we can transfer testing results of semantic segmentation models from synthetic data to equivalent real-life data. With slight modifications, our approach is extendable to, e.g., general multi-class classification tasks. Grounded on the transferability analysis, our approach additionally allows for extensive testing by incorporating controlled simulations. We validate our approach empirically on a semantic segmentation task on driving scenes. Transferability is tested using correlation analysis of IoU and a learned discriminator. Although the latter can distinguish between real-life and synthetic tests, in the former we observe surprisingly strong correlations of 0.7 for both cars and pedestrians.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131530960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Deep Domain Adaptation for LiDAR Perception","authors":"Larissa T. Triess, M. Dreissig, Christoph B. Rist, J. M. Zöllner","doi":"10.1109/IVWorkshops54471.2021.9669228","DOIUrl":"https://doi.org/10.1109/IVWorkshops54471.2021.9669228","url":null,"abstract":"Scalable systems for automated driving have to reliably cope with an open-world setting. This means, the perception systems are exposed to drastic domain shifts, like changes in weather conditions, time-dependent aspects, or geographic regions. Covering all domains with annotated data is impossible because of the endless variations of domains and the time-consuming and expensive annotation process. Furthermore, fast development cycles of the system additionally introduce hardware changes, such as sensor types and vehicle setups, and the required knowledge transfer from simulation.To enable scalable automated driving, it is therefore crucial to address these domain shifts in a robust and efficient manner. Over the last years, a vast amount of different domain adaptation techniques evolved. There already exists a number of survey papers for domain adaptation on camera images, however, a survey for LiDAR perception is absent. Nevertheless, LiDAR is a vital sensor for automated driving that provides detailed 3D scans of the vehicle’s surroundings. To stimulate future research, this paper presents a comprehensive review of recent progress in domain adaptation methods and formulates interesting research questions specifically targeted towards LiDAR perception.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134050618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online and Adaptive Parking Availability Mapping: An Uncertainty-Aware Active Sensing Approach for Connected Vehicles","authors":"Luca Varotto, A. Cenedese","doi":"10.1109/ivworkshops54471.2021.9669241","DOIUrl":"https://doi.org/10.1109/ivworkshops54471.2021.9669241","url":null,"abstract":"Research on connected vehicles represents a continuously evolving technological domain, fostered by the emerging Internet of Things paradigm and the recent advances in intelligent transportation systems. In the context of assisted driving, connected vehicle technology provides real-time information about the surrounding traffic conditions. In this regard, we propose an online and adaptive scheme for parking availability mapping. Specifically, we adopt an information-seeking active sensing approach to select the incoming data, thus preserving the onboard storage and processing resources; then, we estimate the parking availability through Gaussian Process Regression. We compare the proposed algorithm with several baselines, which attain lower performance in terms of mapping convergence speed and adaptation capabilities.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134220647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}