{"title":"A Longitudinal and Lateral Coordinated Control Method of Autonomous Vehicles Considering Time-Varying Delay","authors":"Zhaobo Qin;Wang Liang;Zuoxu Zang;Liang Chen;Manjiang Hu;Qingjia Cui;Yougang Bian","doi":"10.1109/TIV.2024.3393983","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393983","url":null,"abstract":"In order to improve the control accuracy and stability of autonomous vehicles under large delay conditions, the paper proposes a longitudinal and lateral coordinated control method considering fixed sensor delay, actuator lag, and time-varying CAN communication delay. An adaptive all-pass filter time-delay estimator (AAPF-TDE) based on finite impulse response (FIR) and its improved strategies are designed to realize accurate online estimation of CAN communication delay. According to the estimated delay, a longitudinal and lateral coupling time-delay dynamics model with prediction revision and delay augmentation is constructed. A model predictive controller (MPC) combined with Lyapunov asymptotic stability constraints is then designed. CarSim/Simulink co-simulation and vehicle experiment results show that the proposed controller can improve the vehicle stability effectively while ensuring the longitudinal and lateral control accuracy under large delay conditions compared with the controller without considering delay or only considering fixed delay.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7125-7137"},"PeriodicalIF":14.0,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"StripUnet: A Method for Dense Road Extraction From Remote Sensing Images","authors":"Xianzhi Ma;Xiaokai Zhang;Daoxiang Zhou;Zehua Chen","doi":"10.1109/TIV.2024.3393508","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393508","url":null,"abstract":"Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, roads in many remote sensing images are densely distributed with a very small proportion of road information against a complex background, significantly impacting the integrity and connectivity of the extracted road network structure. To address this issue, we propose a method named StripUnet for dense road extraction from remote sensing images. The designed Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the designed Multi-Scale Feature Fusion Module (MSFF) is used for extracting global and contextual information from deep feature maps; the designed Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps transmitted through skip connections; and the designed Multi-Scale Snake Decoder (MSSD) utilizes dynamic snake convolution to aid the model in better reconstructing roads. The designed model is tested on the public datasets DeepGlobe and Massachusetts, achieving F1 scores of 83.75% and 80.65%, and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, F1 scores improve by 1.07% and 1.11%, and IoUs increase by 1.28% and 1.07%, respectively. Experiments demonstrate that StripUnet is highly effective in dense road network extraction.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7097-7109"},"PeriodicalIF":14.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuroadaptive Natural Logarithm Sliding Mode Control for Nonlinear Active Suspension Systems","authors":"Andika Aji Wijaya;Fitri Yakub;Shahrum Shah Abdullah;Rini Akmeliawati;Salem Aljazzar;Itoh Makoto","doi":"10.1109/TIV.2024.3393840","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393840","url":null,"abstract":"A neuroadaptive controller based on natural logarithm sliding mode control (lnSMC) is proposed for an active suspension system with unknown nonlinear dynamics and uncertain model parameters. The proposed control scheme ensures that the controlled states are constrained within the desired bound of heave and pitch motions, thereby eliminating the need for trial and error in determining the lnSMC controller parameters. Moreover, the unknown nonlinear system dynamics are approximated by a radial basis function neural network (RBFNN), which updates its weights continuously in real-time. Considering the high degree of parameter uncertainties in suspension systems, an adaptive law based on a gradient algorithm with a projection operator is incorporated to estimate the unknown parameters (e.g., vehicle mass and mass moment of inertia). Simulation studies on a half-car active suspension model are carried out to evaluate the performance and robustness of the proposed controller under various road disturbances, including bumps and random road profiles. For comparative purposes, neuroadaptive controllers based on classical sliding mode and terminal sliding mode are designed as benchmark controllers. The simulation results indicate that the proposed controller achieves better suspension performance indicators than the benchmark controllers.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7110-7124"},"PeriodicalIF":14.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508496","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras","authors":"Christian Creß;Walter Zimmer;Nils Purschke;Bach Ngoc Doan;Sven Kirchner;Venkatnarayanan Lakshminarasimhan;Leah Strand;Alois C. Knoll","doi":"10.1109/TIV.2024.3393749","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393749","url":null,"abstract":"Event-based cameras are predestined for Intelligent Transportation Systems (ITS). They provide very high temporal resolution and dynamic range, which can eliminate motion blur and improve detection performance at night. However, event-based images lack color and texture compared to images from a conventional RGB camera. Considering that, data fusion between event-based and conventional cameras can combine the strengths of both modalities. For this purpose, extrinsic calibration is necessary. To the best of our knowledge, no targetless calibration between event-based and RGB cameras can handle multiple moving objects, nor does data fusion optimized for the domain of roadside ITS exist. Furthermore, synchronized event-based and RGB camera datasets considering the roadside perspective are not yet published. To fill these research gaps, based on our previous work, we extended our targetless calibration approach with clustering methods to handle multiple moving objects. Furthermore, we developed an Early Fusion, a Simple Late Fusion, and a novel Spatiotemporal Late Fusion method. Lastly, we published the TUMTraf Event Dataset, which contains more than 4,111 synchronized event-based and RGB images with 50,496 labeled 2D boxes. During our extensive experiments, we verified the effectiveness of our calibration method with multiple moving objects. Furthermore, compared to a single RGB camera, we increased detection performance by up to +9% mAP during the day and up to +13% mAP during the challenging night with our presented event-based sensor fusion methods.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 7","pages":"5186-5203"},"PeriodicalIF":14.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508494","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142320476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"YOLO-Adaptor: A Fast Adaptive One-Stage Detector for Non-Aligned Visible-Infrared Object Detection","authors":"Haolong Fu;Hanhao Liu;Jin Yuan;Xuan He;Jiacheng Lin;Zhiyong Li","doi":"10.1109/TIV.2024.3393015","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393015","url":null,"abstract":"Visible-infrared object detection has attracted increasing attention recently due to its superior performance and cost-efficiency. Most existing methods focus on the detection of strictly-aligned data, significantly limiting their practical applications. Although several researchers have attempted to explore weakly-aligned visible-infrared object detection, they are limited to small translational deviations and suffer from a low detection speed. This paper first explores non-aligned visible-infrared object detection with complex deviations in translation, scaling, and rotation, and proposes a fast one-stage detector, YOLO-Adaptor, which introduces a lightweight multi-modal adaptor to simultaneously predict alignment parameters and confidence weights between modalities. The adaptor adopts a feature-level alignment during the feature extraction process, ensuring high alignment efficiency. Moreover, we introduce a feature contrastive learning loss to guide the alignment learning of the adaptor, aiming to reduce the representation gap between the two modalities in hyperbolic space to achieve feature spatial and distributional consistency. Extensive experiments are conducted on three datasets, including one weakly-aligned and two non-aligned datasets, and the experimental results demonstrate that YOLO-Adaptor achieves significant performance improvements in terms of speed and accuracy.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7070-7083"},"PeriodicalIF":14.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating Partially Unknown Environments: A Weakly Supervised Learning Approach to Path Planning","authors":"Liqun Huang;Runqi Chai;Kaiyuan Chen;Jinning Zhang;Senchun Chai;Yuanqing Xia","doi":"10.1109/TIV.2024.3393068","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393068","url":null,"abstract":"In fire rescue missions, the critical research concern is enabling autonomous path planning for mobile robots to quickly and safely navigate to target points. This paper focuses on sampling-based path planning methods under weak supervision. To enhance path quality and computational speed, we employ deep learning to perform non-uniform sampling for sampling-based methods, focusing on regions where optimal paths are more likely to exist. Specifically, the generation of non-uniform sampling regions is regarded as a semantic segmentation problem. In this context, diverse map information is utilized to predict non-uniform sampling regions. Inspired by attention mechanisms in deep learning, we propose an attention-guided model for non-uniform sampling path planning. The learning-driven path planning process comprises offline dataset generation, model training, and online model prediction. However, offline dataset generation is often time-consuming and resource-intensive. To address this challenge, we propose a weakly supervised strategy, which requires the generation of only a single path as ground truth per scenario in semantic segmentation training. Furthermore, considering the potential existence of unknown obstacles along the reference path in real-world settings, we leverage deep reinforcement learning to ensure collision-free path tracking in unknown environments. Finally, extensive experimental simulations are conducted to verify the performance of the attention-guided model and collision-free tracking, and to demonstrate the superiority of our proposed weakly supervised strategy.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7084-7096"},"PeriodicalIF":14.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144510808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A LiDAR-OpenStreetMap Matching Method for Vehicle Global Position Initialization Based on Boundary Directional Feature Extraction","authors":"Zexing Li;Yafei Wang;Ruitao Zhang;Fei Ding;Chongfeng Wei;Jun-Guo Lu","doi":"10.1109/TIV.2024.3393229","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393229","url":null,"abstract":"OpenStreetMap (OSM) is a low-cost, promising alternative for providing an environment representation with global consistency when pre-built LiDAR maps are not available. However, existing descriptor-based perception-OSM matching methods suffer from significant disparities in dimension and precision between the perception data and the pre-built LiDAR map, leading to degradation of matching accuracy. To improve the accuracy and robustness of matching-based global position initialization using OSM, this article proposes a novel boundary-relative orientation feature descriptor with rotational consistency, facilitating the unified representation of perception and OSM. The proposed scale-free descriptor, derived from the relative changes of boundary trend within the OSM, significantly reduces the reliance on the precision of planar spatial information, thereby promoting the matching accuracy required for global position initialization. Furthermore, the performance of global position initialization with the proposed descriptor is evaluated on KITTI datasets. The results illustrate that the proposed descriptor outperforms 3D-3D and 2D-2D descriptor matching-based methods, especially in urban scenarios.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7485-7497"},"PeriodicalIF":14.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Position Uncertainty-Integrated Potential Function for Collision Avoidance Systems Based on Model Predictive Control","authors":"Seungho Han;Junsu Kwon;Kyung-Soo Kim","doi":"10.1109/TIV.2024.3392895","DOIUrl":"https://doi.org/10.1109/TIV.2024.3392895","url":null,"abstract":"In this article, an avoidance potential function with an uncertainty-integrated isopotential contour is proposed to perform vehicle collision avoidance with model predictive control (MPC) under the presence of uncertainty. A superquadric-based potential function describing an object is modified to reflect uncertainty, caused by detection sensor noise, using the suggested uncertainty area and surface-morphing function. In particular, the uncertainty area is designed such that sensor uncertainty expands the isopotential contour at the object boundary. Thus, the proposed potential function is robust against uncertainty in the object position. The surface-morphing function is designed to modify the isopotential contour at the object boundary by comparing the size of the uncertainty area and the object. Then, the proposed model is directly integrated into the cost function of the MPC to generate the optimal steering angle and engine torque as control inputs. Specifically, the iterative linear quadratic regulator (iLQR) is adopted among MPC methods to handle nonlinearity in the system and potential function. The proposed model is validated in virtual tests, which show that the vehicle avoids both static and moving objects using the proposed model, even under uncertainty and without parameter adjustment.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7045-7058"},"PeriodicalIF":14.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144501948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Online Computation of MPC-Based Integrated Decision Control for Autonomous Vehicles","authors":"Shaosong Li;Wenjun Zou;Jiaxin Gao;Yuming Yin;Dongyoon Kim;Sen Yang;Shengbo Eben Li","doi":"10.1109/TIV.2024.3393013","DOIUrl":"https://doi.org/10.1109/TIV.2024.3393013","url":null,"abstract":"In this paper, an integrated decision control (IDC) architecture is introduced that seamlessly integrates autonomous decision-making and motion control into a unified processing framework. This architecture primarily comprises two key modules: a static path planner and an MPC-based dynamic optimal tracker. The former exclusively utilizes static information, such as road geometry, roadside signs, and road markings, to formulate a candidate path set. Building upon this foundation, the latter autonomously determines the most suitable driving path from the candidate paths. It integrates vehicle dynamics with dynamic information, including traffic participants and traffic lights, to design a constrained trajectory tracking controller for achieving precise motion control. Furthermore, from an engineering practice perspective, a dimension reduction control strategy for both control inputs and system constraints is devised to enhance the real-time performance of the IDC system. Experimental results affirm that the proposed strategy effectively facilitates autonomous and secure driving of vehicles in open road traffic environments.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7059-7069"},"PeriodicalIF":14.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144501917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Glare Mitigation for Enhanced Autonomous Vehicle Perception","authors":"Muhammad Zeshan Alam;Zeeshan Kaleem;Sousso Kelouwani","doi":"10.1109/TIV.2024.3392675","DOIUrl":"https://doi.org/10.1109/TIV.2024.3392675","url":null,"abstract":"Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth. This versatility, along with the relatively inexpensive availability of machine vision cameras, played an important role in the adoption of vision-based environment perception systems in autonomous vehicles (AVs). However, vision-based perception systems can be easily affected by glare in the presence of a bright source of light, such as the sun or the headlights of an oncoming vehicle at night, or simply by light reflecting off snow or ice-covered surfaces, scenarios frequently encountered during driving. In this paper, we investigate various glare reduction techniques, including the proposed saturated pixel-aware glare reduction technique, for improved performance of the computer vision (CV) tasks employed by the perception layer of AVs. We evaluate these glare reduction methods based on various performance metrics of the CV algorithms used by the perception layer. Specifically, we considered object detection, object recognition, object tracking, depth estimation, and lane detection, which are crucial for autonomous driving. The experimental findings validate the efficacy of the proposed glare reduction approach, showcasing enhanced performance across diverse perception tasks and remarkable resilience against varying levels of glare.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7030-7044"},"PeriodicalIF":14.0,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144501034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}