2023 IEEE Intelligent Vehicles Symposium (IV): Latest Publications

A Risk Level Assessment Method for Traffic Scenarios Based on BEV Perception
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186810
Liangyu Tian, Haoran Li, Wangling Wei, Sifa Zheng, Chuan Sun
Abstract: How to fully test safety and functionality under different driving scenarios is a key issue for the development and application of autonomous vehicles. In this study, aimed at test scenarios for autonomous vehicles, we propose a lidar-camera fusion approach for traffic environment sensing. Building on the successful Lift-Splat-Shoot (LSS) model, we propose a unique data enhancement strategy to improve fusion accuracy. By building a test dataset with a high-precision acquisition vehicle, we verify that the fusion algorithm proposed in this paper can accurately distinguish the translation, scale, orientation, and velocity of the target. This study can promote test scenario generation methods.
Citations: 0
Characterizing Drivers’ Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186746
Abhijat Biswas, H. Admoni
Abstract: Many intelligent driver assistance algorithms try to improve on-road safety by using driver eye gaze, commonly using foveal gaze as an estimate of human attention. While human visual acuity is highest in the foveal field of view, drivers often use their peripheral vision to process scene elements. Previous work in psychology has modeled this combination of foveal and peripheral gaze as a construct known as the Functional Field of View (FFoV). In this work, we study the shape and dynamics of the FFoV during active driving. We use a peripheral detection task in a virtual reality (VR) driving simulator with licensed drivers in urban driving settings. We find evidence that supports a vertically asymmetric (upward-inhibited) shape of the FFoV in our active driving task, similar to previous work in non-driving settings. Additionally, we show that this asymmetry disappears when the same peripheral detection task is conducted in a non-driving setting. Finally, we also examine the dynamic nature of the FFoV. Our data indicate that drivers’ peripheral target detection ability is inhibited right after saccades but recovers once drivers fixate for some time. The findings of the FFoV’s task-dependent nature, as well as its systematic asymmetries and inhibitions, have implications for gaze-based intelligent driving assistance systems.
Citations: 1
PMR-CNN: Prototype Mixture R-CNN for Few-Shot Object Detection
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186683
Jiancong Zhou, Jilin Mei, Haoyu Li, Yu Hu
Abstract: Few-shot object detection is a challenging task because of the limited annotation data. Under the limitation of few-shot samples, images from the same class may differ significantly in appearance and pose. Although research has progressed considerably since prototype vectors were introduced to few-shot object detection, the previous paradigm is still constrained by several factors: (1) using a single prototype to represent the support image tends to cause semantic ambiguity; (2) the way prototypes are extracted is too simple, e.g., global average pooling, which makes the prototypes insufficiently representative. In this work, we design PMR-CNN to address the above limitations. PMR-CNN proposes a new method of prototype generation and enhances the representative information by using multiple prototypes to represent support images. For experiments, we evaluate our method not only on the general image dataset MS COCO but also on SiTi, a real-world autonomous driving dataset collected by us. Experiments on the few-shot object detection benchmark show a significant advantage over previous methods. Code is available at: https://github.com/Chientsung-Chou/PMR-CNN.
Citations: 1
Enhancing Robustness of LiDAR-Based Perception in Adverse Weather using Point Cloud Augmentations
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186696
Sven Teufel, Jörg Gamerdinger, G. Volk, Christoph Gerum, O. Bringmann
Abstract: LiDAR-based perception systems have become widely adopted in autonomous vehicles. However, their performance can be severely degraded in adverse weather conditions such as rain, snow, or fog. To address this challenge, we propose a method for improving the robustness of LiDAR-based perception in adverse weather using data augmentation techniques on point clouds. We use novel as well as established data augmentation techniques, such as realistic weather simulations, to provide a wide variety of training data for LiDAR-based object detectors. The performance of the state-of-the-art detector Voxel R-CNN using the proposed augmentation techniques is evaluated on a dataset of real-world point clouds collected in adverse weather conditions. The achieved improvements in average precision (AP) are 4.00 p.p. in fog, 3.35 p.p. in snow, and 4.87 p.p. in rain at moderate difficulty. Our results suggest that data augmentations on point clouds are an effective way to improve the robustness of LiDAR-based object detection in adverse weather.
Citations: 1
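The augmentation idea in the entry above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the (N, 4) point layout, and the exponential range model are assumptions standing in for the physically based weather simulations the paper uses.

```python
import numpy as np

def augment_fog(points: np.ndarray, density: float = 0.05, rng=None) -> np.ndarray:
    """Toy fog augmentation for an (N, 4) point cloud [x, y, z, intensity].

    Distant points are dropped with a range-dependent probability and the
    surviving intensities are attenuated, loosely following a Beer-Lambert
    style decay with the fog density coefficient.
    """
    rng = np.random.default_rng(rng)
    ranges = np.linalg.norm(points[:, :3], axis=1)
    # Survival probability decays exponentially with range.
    keep = rng.random(len(points)) < np.exp(-density * ranges)
    fogged = points[keep].copy()
    # Attenuate intensity of the points that survive.
    fogged[:, 3] *= np.exp(-density * np.linalg.norm(fogged[:, :3], axis=1))
    return fogged
```

Training a detector on a mix of clean and such weather-perturbed clouds is the general recipe the paper evaluates, albeit with far more realistic weather models.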
Uncertainty Aware Task Allocation for Human-Automation Cooperative Recognition in Autonomous Driving Systems
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186725
Atsushi Kuribayashi, E. Takeuchi, Alexander Carballo, Yoshio Ishiguro, K. Takeda
Abstract: Cooperative recognition, a method for achieving human-automation cooperation in the recognition phase of an autonomous driving system, has been proposed to address the challenges of conventional control-phase cooperation, e.g., taking over vehicle control. In cooperative recognition, the operator intervenes in recognition tasks that are difficult for the automated system alone, improving driving efficiency and safety. The challenge is integrating humans and automated systems when the two participants have different characteristics, processing capabilities, and uncertainty in their decisions (recognition results). The objectives of this study are task allocation (i.e., when and for which targets the operator should intervene) that accounts for intervention efficiency and the human state, and combining the human intervention with the recognition results of the automated system to resolve the uncertainties of both participants. We formulate this problem as a Partially Observable Markov Decision Process (POMDP). A simulator experiment showed that the recognition results of the automated system and the operator’s interventions were stochastically combined, and that intervention requests to the operator adapted to the operator state and could be reduced while maintaining driving efficiency and minimizing risk omissions.
Citations: 0
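One ingredient of such cooperative recognition, combining two uncertain recognition results, can be sketched as a Bayes update in odds form. This is a hypothetical simplification: the paper's POMDP formulation additionally models the operator state and intervention timing, which this snippet ignores.

```python
def fuse_recognition(p_auto: float, p_human: float, prior: float = 0.5) -> float:
    """Fuse automated and human hazard estimates assuming conditional independence.

    Each input is the probability the respective observer assigns to the target
    being a real hazard, interpreted as a posterior under a uniform prior, so
    its likelihood ratio is p / (1 - p). The fused posterior comes from
    multiplying the prior odds by both likelihood ratios.
    """
    odds = (prior / (1.0 - prior)) \
         * (p_auto / (1.0 - p_auto)) \
         * (p_human / (1.0 - p_human))
    return odds / (1.0 + odds)
```

With this fusion an uncommitted observer (p = 0.5) leaves the other estimate unchanged, while two moderately confident observers reinforce each other, which is the qualitative behavior described in the abstract.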
VIF-GNN: A Novel Agent Trajectory Prediction Model based on Virtual Interaction Force and GNN
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186793
Yuning Wang, Zhiyuan Liu, Haotian Lin, Jinhao Li, Ruochen Li, Jianqiang Wang
Abstract: Agent trajectory prediction in traffic scenarios is a significant module of environment reasoning and autonomous vehicle decision making, and its core challenge is interaction reasoning in complex scenes. Previous prediction models are either not precise enough or require massive computational costs. In this paper, we propose VIF-GNN, a novel traffic agent trajectory prediction framework based on the Virtual Interaction Force (VIF) concept and graph neural networks, which consists of semantic feature engineering, a subgraph encoder, a global graph, and a trajectory decoder. The method extracts vectorized features, including a VIF adjacency matrix, from raw inputs and transfers them into graph nodes through the subgraph encoder. The global graph module obtains spatiotemporal reasoning information from four interaction layers combined with the VIF prior knowledge, and the decoder translates the graph into trajectories of the target agent. Experiments show that VIF-GNN achieves precise forecasting on both single- and multi-modal prediction tasks compared with the baselines while maintaining a relatively small parameter scale, ensuring real-time performance in vehicle platform applications.
Citations: 0
Weakly Supervised Multi-Modal 3D Human Body Pose Estimation for Autonomous Driving
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186575
P. Bauer, Arij Bouazizi, U. Kressel, F. Flohr
Abstract: Accurate 3D human pose estimation (3D HPE) is crucial for enabling autonomous vehicles (AVs) to make informed decisions and respond proactively in critical road scenarios. Promising 3D HPE results have been obtained in several domains, such as human-computer interaction, robotics, sports, and medical analytics, often based on data collected in well-controlled laboratory environments. Nevertheless, the transfer of 3D HPE methods to AVs has received limited research attention, due to the challenges posed by obtaining accurate 3D pose annotations and the limited suitability of data from other domains. We present a simple yet efficient weakly supervised approach for 3D HPE in the AV context by employing high-level sensor fusion between camera and LiDAR data. The weakly supervised setting enables training on the target datasets without any 2D/3D keypoint labels by using an off-the-shelf 2D joint extractor and pseudo labels generated from LiDAR-to-image projections. Our approach outperforms state-of-the-art results by up to ~13% on the Waymo Open Dataset in the weakly supervised setting and achieves state-of-the-art results in the supervised setting.
Citations: 3
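The pseudo labels in the entry above come from LiDAR-to-image projections, whose geometric core is the standard pinhole camera model. A minimal sketch follows; the paper's pipeline also involves extrinsic calibration and point-to-joint association, which are not shown, and the function name is our own.

```python
import numpy as np

def project_to_image(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 points in the camera frame to pixel coordinates.

    K is the 3x3 pinhole intrinsic matrix; points are assumed to lie
    in front of the camera (positive z).
    """
    uvw = points_cam @ K.T           # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division by depth
```

Points projected this way can be associated with 2D joint detections to produce the depth-bearing pseudo labels the abstract describes.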
Formal Verification of Safety Architectures for Automated Driving
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186763
Clovis Eberhart, Jérémy Dubut, J. Haydon, I. Hasuo
Abstract: Safety architectures play a crucial role in the safety assurance of automated driving vehicles (ADVs). They can be used as safety envelopes for black-box ADV controllers and for graceful degradation from one ODD to another. Building on our previous work on the formalization of responsibility-sensitive safety (RSS), we introduce a novel program logic that accommodates assume-guarantee reasoning and fallback-like constructs. This allows us to formally define and prove the safety of existing and novel safety architectures. We apply the logic to a pull-over scenario and experimentally evaluate the resulting safety architecture.
Citations: 1
Predicting Driver Behavior on the Highway with Multi-Agent Adversarial Inverse Reinforcement Learning
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186547
Henrik Radtke, Henrik Bey, Moritz Sackmann, Torsten Schön
Abstract: For the implementation of autonomous or highly automated driving functions, predicting the behavior of surrounding road users is highly relevant. This work investigates the prediction of driver behavior on the highway. For this purpose, different approaches, such as rule-based functions or behavior cloning models, have been proposed in the literature. In this work, a realization with Adversarial Inverse Reinforcement Learning (AIRL) is implemented. AIRL combines behavior cloning and reinforcement learning approaches: based on Generative Adversarial Networks (GANs), it uses competition between a discriminator and a generator. This paper uses Proximal Policy Optimization (PPO) as the generator in AIRL. Further, the prediction is realized in a multi-agent fashion to achieve the most accurate interaction-aware prediction. Our evaluation shows that this work achieves better accuracy than rule-based models and comparable accuracy to state-of-the-art methods, while additionally having an extremely low inference time.
Citations: 0
Learn-able Evolution Convolutional Siamese Neural Network for Adaptive Driving Style Preference Prediction
2023 IEEE Intelligent Vehicles Symposium (IV) Pub Date: 2023-06-04 DOI: 10.1109/IV55152.2023.10186593
Fatemeh Koochaki, Z. Zheng, K. Akash, Teruhisa Misu
Abstract: We propose a framework for detecting user driving style preference from multimodal signals, in order to adapt autonomous vehicle driving style to drivers’ preferences automatically. A mismatch between the automated vehicle’s driving style and the driver’s preference can lead to more frequent takeovers or even disabling of the automation features. We collected multimodal data from 36 human participants on a driving simulator, including eye gaze, steering grip force, driving maneuvers, brake and throttle pedal inputs, foot distance from the pedals, pupil diameter, galvanic skin response, heart rate, and situational drive context. Based on these data, we constructed a data-driven framework using convolutional Siamese neural networks (CSNNs) to identify preferred driving styles. The model’s performance improves significantly on that reported in the existing literature. In addition, we demonstrate that the proposed framework can improve model performance using data from target users without retraining the network, which validates the potential of online model adaptation through continued driver-system interaction. We also perform an ablation study on the sensing modalities and present the importance of each data channel.
Citations: 0
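At inference time, a Siamese embedding can drive preference prediction by comparing a new driver's embedding against per-style prototypes. A minimal sketch of that matching step follows; all names are hypothetical, and the plain vectors stand in for the CSNN's learned multimodal features.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_preference(query: np.ndarray, prototypes: dict) -> str:
    """Return the driving-style label whose prototype is most similar to the query."""
    return max(prototypes, key=lambda label: cosine_similarity(query, prototypes[label]))
```

Because only the prototype set changes per user, this kind of matching can incorporate target-user data without retraining the embedding network, which is the adaptation property the abstract highlights.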