2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops): Latest Publications

Unsupervised Joint Multi-Task Learning of Vision Geometry Tasks
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669211
P. Jha, D. Tsanev, L. Lukic
Abstract: In this paper, we present a novel architecture and training methodology for learning monocular depth prediction, camera pose estimation, optical flow, and moving object segmentation using a common encoder in an unsupervised fashion. We demonstrate that the geometrical relationships between these tasks not only support joint unsupervised learning as shown in previous works but also allow them to share common features. We also show the advantage of using a two-stage learning approach to improve the performance of the base network.
Citations: 1
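The shared-encoder idea described in the abstract can be illustrated with a small sketch. Below is a minimal PyTorch example of one encoder feeding several task-specific heads (depth, optical flow, motion segmentation); the layer sizes, head designs, and the omitted pose branch are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' architecture): a shared encoder feeding
# task-specific heads for depth, optical flow and moving-object segmentation.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        # One lightweight head per task; all heads consume the same features.
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)   # depth (sigmoid, in (0, 1))
        self.flow_head = nn.Conv2d(64, 2, 3, padding=1)    # optical flow (u, v)
        self.seg_head = nn.Conv2d(64, 1, 3, padding=1)     # moving-object mask logits

    def forward(self, image):
        feats = self.encoder(image)
        return {
            "depth": torch.sigmoid(self.depth_head(feats)),
            "flow": self.flow_head(feats),
            "motion_mask": self.seg_head(feats),
        }

net = MultiTaskNet()
out = net(torch.randn(1, 3, 128, 416))  # e.g. a KITTI-sized crop
print({k: tuple(v.shape) for k, v in out.items()})
```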
Kalman Filter Based Extended Object Tracking with a Gaussian Mixture Spatial Distribution Model
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669221
Kolja Thormann, Shishan Yang, M. Baum
Abstract: Extended object tracking methods are often based on the assumption that the measurements are uniformly distributed on the target object. However, this assumption is often invalid for applications using automotive radar or lidar data. Instead, there is a bias towards the side of the object that is visible to the sensor. To handle this challenge, we employ a Gaussian Mixture (GM) density to model a more detailed measurement distribution across the surface and extend a recent Kalman filter based elliptic object tracker called MEM-EKF* to get a closed-form solution for the measurement update. An evaluation of the proposed approach compared with classic elliptic trackers and a recent truncation-based approach is conducted on simulated data.
Citations: 0
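As a rough illustration of the Gaussian mixture measurement model, the sketch below moment-matches a two-component mixture over the object surface into an effective measurement mean and covariance and plugs them into a standard Kalman update of the object's 2-D center. This is a simplified assumption-based example, not the MEM-EKF* update derived in the paper.

```python
# Sketch: moment-match a Gaussian mixture spatial model into an effective
# measurement mean/covariance, then run a standard Kalman center update.
import numpy as np

def gm_moments(weights, means, covs):
    """Overall mean and covariance of a 2-D Gaussian mixture."""
    weights = np.asarray(weights)
    means = np.asarray(means)
    mean = weights @ means
    cov = np.zeros((2, 2))
    for w, m, c in zip(weights, means, covs):
        d = (m - mean).reshape(2, 1)
        cov += w * (c + d @ d.T)
    return mean, cov

def kf_update(x, P, z, offset_mean, offset_cov, R_sensor):
    """Kalman update of the center state x with one measurement z,
    where z = x + mixture offset + sensor noise."""
    H = np.eye(2)
    R = offset_cov + R_sensor          # spread on the object + sensor noise
    z_pred = H @ x + offset_mean       # bias toward the visible side
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - z_pred)
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Two mixture components biased toward the sensor-facing side of the object.
w = [0.7, 0.3]
mu = [np.array([-1.5, 0.0]), np.array([0.5, 0.0])]
C = [0.2 * np.eye(2), 0.2 * np.eye(2)]
offset_mean, offset_cov = gm_moments(w, mu, C)

x, P = np.array([10.0, 5.0]), np.eye(2)
z = np.array([8.7, 5.1])
print(kf_update(x, P, z, offset_mean, offset_cov, 0.1 * np.eye(2)))
```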
Blockchain Based Vehicle Authentication Scheme for Vehicular Ad-hoc Networks
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669235
Steffie Maria Stephen, A. Jaekel
Abstract: Vehicular Ad Hoc Network (VANET) is a pervasive network in which vehicles communicate with nearby vehicles and infrastructure nodes, such as road-side units (RSUs). Information sharing among vehicles is an essential component of an intelligent transportation system (ITS), but security and privacy concerns must be taken into consideration. Security of the network can be improved by granting access only to authenticated vehicles and restricting or revoking access for vehicles involved in misbehavior. In this paper, we present a novel blockchain based approach to authenticate vehicles and notify other vehicles about any unauthorized messages in real time. This helps protect other vehicles in the network from making critical decisions based on false or inaccurate information. In the proposed architecture, vehicles communicate with each other using pseudonyms or pseudo IDs, and the blockchain is used to securely maintain the real identity of all vehicles, which can be linked to the pseudo IDs if needed. The goal is to protect the privacy of individual vehicles while still ensuring accountability in case of misbehavior. The performance of the proposed approach is evaluated for different vehicle and attacker densities, and results demonstrate it has lower authentication delay and communication overhead compared to existing approaches.
Citations: 2
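A toy sketch of the underlying idea, anchoring pseudonym-to-identity bindings in a tamper-evident hash-chained ledger, is given below. The class name and the stored fields are hypothetical and do not reflect the paper's actual blockchain protocol or message formats.

```python
# Toy ledger: each block stores a pseudo-ID binding and the hash of the
# previous block, so tampering with any binding breaks the chain.
import hashlib
import json
import time

class PseudonymLedger:
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "data": "genesis", "ts": time.time()}]

    def _hash(self, block) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, pseudo_id: str, real_id_digest: str):
        """Append a binding; only a digest of the real identity is stored."""
        block = {
            "prev": self._hash(self.blocks[-1]),
            "data": {"pseudo_id": pseudo_id, "real_id_digest": real_id_digest},
            "ts": time.time(),
        }
        self.blocks.append(block)

    def verify_chain(self) -> bool:
        """Check that no block has been tampered with."""
        return all(self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

ledger = PseudonymLedger()
ledger.register("PSEUDO-42", hashlib.sha256(b"VIN-1HGCM82633A004352").hexdigest())
print(ledger.verify_chain())   # True while the chain is intact
```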
ADS-B Attack Classification using Machine Learning Techniques
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669212
Thabet Kacem, Aydin Kaya, A. Keçeli, C. Catal, D. Wijesekera, P. Costa
Abstract: Automatic Dependent Surveillance Broadcast (ADS-B) is one of the most prominent protocols in Air Traffic Control (ATC). Its key advantages derive from using GPS as a location provider, resulting in better location accuracy while offering substantially lower deployment and operational costs compared to traditional radar technologies. ADS-B can not only enhance radar coverage but also serve as a standalone solution for areas without radar coverage. Despite these advantages, wider adoption of the technology is limited by security vulnerabilities, which are rooted in the protocol’s open broadcast of clear-text messages. In spite of the seriousness of such concerns, very few researchers have attempted to propose viable approaches to address these vulnerabilities. In addition to detecting ADS-B attacks, classifying them is just as important, since it enables security experts and ATC controllers to better understand the attack vector and thus improve future protection mechanisms. Unfortunately, there has been very little research on automatically classifying ADS-B attacks. Even the few approaches that attempted to do so considered just two classification categories, i.e., malicious vs. non-malicious messages. In this paper, we propose a new module for our ADS-Bsec framework capable of classifying ADS-B attacks using advanced machine learning techniques including Support Vector Machines (SVM), Decision Trees, and Random Forests (RF). Our module has the advantage that it adopts a multi-class classification approach based on the nature of the ADS-B attacks, rather than the traditional two-category classifiers. To illustrate and evaluate our ideas, we designed several experiments using a flight dataset from Lisbon to Paris that includes ADS-B attacks from three categories. Our experimental results demonstrate that machine learning-based models provide high performance in terms of accuracy, sensitivity, and specificity metrics.
Citations: 1
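The multi-class classification step can be sketched with scikit-learn as below; the features and the three attack labels are synthetic placeholders rather than the Lisbon-to-Paris ADS-B dataset used in the paper.

```python
# Sketch: train SVM, Decision Tree and Random Forest classifiers on a
# three-class problem and report per-class metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))          # e.g. per-message kinematic features (placeholder)
y = rng.integers(0, 3, size=600)       # three attack categories (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("DecisionTree", DecisionTreeClassifier()),
                  ("RandomForest", RandomForestClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```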
Precise self-localization for last mile delivery automated driving in unstructured environments
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669233
Paul Czerwionka, Fabian Pucks, Hans Harte, R. Blaschek, Robert Treiber, Ahmed Hussein
Abstract: In research on last mile automated driving, self-localization is an important problem to solve. In this paper, a precise self-localization algorithm is presented, which is based on a given map and uses LiDAR and camera sensors. The proposed approach serves as the solution to the localization problem within the VanAssist project. Several experiments were carried out to validate the work and compare it against accurate RTK-GPS reference data. The evaluation shows that the localization result is within the requirements for last mile automated driving. Moreover, it indicates that the solution is robust in handling limitations compared to other approaches in the literature.
Citations: 0
Eagleye: A Lane-Level Localization Using Low-Cost GNSS/IMU
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669209
Aoki Takanose, Yuki Kitsukawa, Junichi Megruo, E. Takeuchi, Alexander Carballo, K. Takeda
Abstract: In this paper, we propose Eagleye, an open-source software package that performs lane-level localization in urban environments. A low-cost GNSS receiver, an IMU, and a velocity sensor are used for position estimation. The distinguishing feature of this method is that it is optimized to take full advantage of the averaging effect by using time-series data longer than a few tens of seconds. This optimization improves the estimation performance by reducing the effect of GNSS multipath in urban areas. To verify the effectiveness of the system, we conducted an accuracy evaluation of the proposed method and performance comparison tests against expensive position estimation systems. The tests confirmed that the proposed method estimates relative position with an accuracy of 0.5 m per 100 m and absolute position with an accuracy of 1.5 m. In addition, the performance of the proposed method was confirmed to be equivalent to that of an expensive system. Therefore, the proposed method can effectively estimate the location even in an urban environment.
Citations: 2
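The "averaging effect" the abstract refers to can be illustrated with a simple sketch: dead-reckon with velocity and yaw rate, then estimate the slowly varying position offset as a long moving average of the GNSS-minus-dead-reckoning residual, so that roughly zero-mean multipath noise cancels out. This is only an illustration of the principle with made-up numbers, not the Eagleye implementation.

```python
# Sketch of long-window averaging: noisy GNSS fixes are blended with a
# velocity/yaw-rate dead-reckoned trajectory via a 30 s moving average.
import numpy as np

def dead_reckon(x0, y0, heading0, v, yaw_rate, dt):
    """Integrate velocity and yaw rate into a 2-D trajectory."""
    xs, ys, h = [x0], [y0], heading0
    for vi, wi in zip(v, yaw_rate):
        h += wi * dt
        xs.append(xs[-1] + vi * np.cos(h) * dt)
        ys.append(ys[-1] + vi * np.sin(h) * dt)
    return np.array(xs), np.array(ys)

def long_window_correction(dr_xy, gnss_xy, window=300):
    """Average the GNSS-minus-dead-reckoning residual over a long window
    (here 300 samples, i.e. 30 s at 10 Hz) to suppress multipath noise."""
    resid = gnss_xy - dr_xy
    kernel = np.ones(window) / window
    bias_x = np.convolve(resid[:, 0], kernel, mode="same")
    bias_y = np.convolve(resid[:, 1], kernel, mode="same")
    return dr_xy + np.stack([bias_x, bias_y], axis=1)

dt, n = 0.1, 1000
v = np.full(n, 10.0)                       # 10 m/s straight drive
yaw = np.zeros(n)
x, y = dead_reckon(0.0, 0.0, 0.0, v, yaw, dt)
dr = np.stack([x, y], axis=1)              # treated as ground truth here
gnss = dr + np.random.default_rng(1).normal(scale=2.0, size=dr.shape)  # noisy fixes
fused = long_window_correction(dr, gnss)
print(np.mean(np.linalg.norm(fused - dr, axis=1)))  # residual error after averaging
```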
Pedestrian Trajectory Prediction via Spatial Interaction Transformer Network
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669249
Tong Su, Yu Meng, Yan Xu
Abstract: As a core technology of the autonomous driving system, pedestrian trajectory prediction can significantly enhance active vehicle safety functions and reduce road traffic injuries. In traffic scenes, when encountering oncoming people, pedestrians may make sudden turns or stop immediately, which often leads to complicated trajectories. To predict such unpredictable trajectories, we can gain insights from the interaction between pedestrians. In this paper, we present a novel generative method named Spatial Interaction Transformer (SIT), which learns the spatio-temporal correlation of pedestrian trajectories through attention mechanisms. Furthermore, we introduce the conditional variational autoencoder (CVAE) [1] framework to model the future latent motion states of pedestrians. In particular, experiments on the large-scale traffic dataset nuScenes [2] show that SIT outperforms state-of-the-art (SOTA) methods. Experimental evaluation on the challenging ETH [3] and UCY [4] datasets confirms the robustness of our proposed model.
Citations: 7
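The CVAE component can be sketched as follows; the layer sizes, sequence lengths, and the simple MLP encoder/decoder are illustrative assumptions and do not reproduce the Spatial Interaction Transformer itself.

```python
# Sketch of a trajectory CVAE: encode observed + future steps into a latent z,
# then decode the future conditioned on the observation and z.
import torch
import torch.nn as nn

class TrajCVAE(nn.Module):
    def __init__(self, obs_len=8, pred_len=12, hidden=64, z_dim=16):
        super().__init__()
        self.obs_enc = nn.Linear(obs_len * 2, hidden)
        self.post = nn.Linear(hidden + pred_len * 2, 2 * z_dim)   # q(z | obs, future)
        self.dec = nn.Sequential(
            nn.Linear(hidden + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, pred_len * 2),
        )

    def forward(self, obs, future):
        h = torch.relu(self.obs_enc(obs.flatten(1)))
        mu, logvar = self.post(torch.cat([h, future.flatten(1)], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        pred = self.dec(torch.cat([h, z], dim=1)).view(-1, future.shape[1], 2)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return pred, kld

model = TrajCVAE()
obs = torch.randn(4, 8, 2)      # 4 pedestrians, 8 observed (x, y) steps
fut = torch.randn(4, 12, 2)     # 12 future steps
pred, kld = model(obs, fut)
loss = nn.functional.mse_loss(pred, fut) + kld
print(pred.shape, loss.item())
```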
Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669255
J. Niemeijer, J. P. Schäfer
Abstract: This work presents a two-staged, unsupervised domain adaptation process for semantic segmentation models that combines a self-training and a self-supervision strategy. Self-training (i.e., training a model on self-inferred pseudo-labels) yields competitive results for domain adaptation in recent research. However, self-training depends on high-quality pseudo-labels. On the other hand, self-supervision trains the model on a surrogate task and improves its performance on the target domain without further prerequisites. Therefore, our approach improves the model’s performance on the target domain with a novel surrogate task. To that end, we continuously determine class centroids of the feature representations in the network’s pre-logit layer on the source domain. Our surrogate task clusters the pre-logit feature representations on the target domain around these class centroids during both training stages. After the first stage, the resulting model delivers improved pseudo-labels for the additional self-training in the second stage. We evaluate our method on two different domain adaptations: a real-world domain change from Cityscapes to the Berkeley Deep Drive dataset, and a synthetic-to-real-world domain change from GTA5 to the Cityscapes dataset. For the real-world domain change, the evaluation shows a significant improvement of the model from 46% mIoU to 54% mIoU on the target domain. For the synthetic-to-real-world domain change, we achieve an improvement from 38.8% to 46.42% mIoU on the real-world target domain.
Citations: 5
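A minimal sketch of the surrogate clustering task follows, under assumptions about the feature shapes and the update rule: per-class centroids are accumulated from source-domain pre-logit features, and target-domain features are pulled toward the centroid of their currently predicted class.

```python
# Sketch: EMA class centroids from labeled source features, plus a clustering
# loss that pulls target features toward the centroid of their predicted class.
import torch
import torch.nn.functional as F

def update_centroids(centroids, feats, labels, momentum=0.99):
    """EMA update of per-class centroids from source pre-logit features.
    feats: (N, C), labels: (N,) ground-truth classes."""
    for c in labels.unique():
        mean_c = feats[labels == c].mean(dim=0)
        centroids[c] = momentum * centroids[c] + (1 - momentum) * mean_c
    return centroids

def clustering_loss(centroids, target_feats, target_logits):
    """Pull each target feature toward the centroid of its predicted class."""
    pred = target_logits.argmax(dim=1)            # pseudo class per feature vector
    return F.mse_loss(target_feats, centroids[pred].detach())

num_classes, dim = 19, 256                        # e.g. Cityscapes classes (assumed dims)
centroids = torch.zeros(num_classes, dim)

src_feats = torch.randn(1024, dim)                # flattened pre-logit features (placeholder)
src_labels = torch.randint(0, num_classes, (1024,))
centroids = update_centroids(centroids, src_feats, src_labels)

tgt_feats = torch.randn(1024, dim, requires_grad=True)
tgt_logits = torch.randn(1024, num_classes)
loss = clustering_loss(centroids, tgt_feats, tgt_logits)
loss.backward()
print(loss.item())
```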
Learning to Drive from Observations while Staying Safe
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669229
Damian Boborzi, Florian Kleinicke, Jens S. Buchner, Lars Mikelsons
Abstract: The simulation of real-world traffic is a challenging task that can be accelerated by imitation learning. Recent approaches based on neural network policies have shown promising results in generating human-like driving behavior. However, one drawback is that certain behaviors, such as avoiding accidents, cannot be guaranteed with such policies. Therefore, we propose to combine recent imitation learning methods like GAIL with a rule-based safety framework to avoid collisions during training and testing. Our method is evaluated on highway driving scenes where all vehicles are controlled by our driving policies trained on the real-world driving dataset highD. In this setup, our method is compared to a standard neural network policy trained with GAIL. Agents using our method were able to match GAIL performance while additionally guaranteeing collision-free driving.
Citations: 2
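One way such a rule-based safety layer can wrap a learned policy is sketched below; the time-headway rule and the emergency-brake fallback are illustrative assumptions, not the specific safety framework used in the paper.

```python
# Sketch: override the learned policy's acceleration when a simple
# time-headway rule flags an unsafe following situation.
from dataclasses import dataclass

@dataclass
class State:
    ego_speed: float      # m/s
    gap_to_lead: float    # m
    lead_speed: float     # m/s

def learned_policy(state: State) -> float:
    """Placeholder for the GAIL-trained policy: returns acceleration (m/s^2)."""
    return 0.5

def safe_action(state: State, action: float,
                min_time_headway: float = 1.0,
                max_brake: float = -6.0) -> float:
    """Return the policy action unless the time headway drops too low."""
    if state.ego_speed <= 0.0:
        return action
    time_headway = state.gap_to_lead / state.ego_speed
    if time_headway < min_time_headway:
        return max_brake                 # rule-based emergency braking
    return action

s = State(ego_speed=25.0, gap_to_lead=20.0, lead_speed=22.0)
print(safe_action(s, learned_policy(s)))   # -6.0: the safety layer overrides the policy
```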
Machine learning based 3D object detection for navigation in unstructured environments
2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) Pub Date : 2021-07-11 DOI: 10.1109/ivworkshops54471.2021.9669218
G. Nikolovski, Michael Reke, I. Elsen, S. Schiffer
Abstract: In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, unusual driving scenarios outside of street traffic pose additional challenges. For one, the collection of appropriate datasets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While different solutions exist for these problems in regular autonomous driving, there are only very few approaches that work equally well for special domains. We address both challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LiDAR data and we examine synthetic data generation. Using these datasets we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks for 3D object detection on a synthetic dataset and on a smaller one containing a characteristic object from an open-pit mine.
Citations: 2