Cooperative Multi-Modal Localization in Connected and Autonomous Vehicles

Nikos Piperigkos, A. Lalos, K. Berberidis, C. Anagnostopoulos
DOI: 10.1109/CAVS51000.2020.9334558
Published in: 2020 IEEE 3rd Connected and Automated Vehicles Symposium (CAVS), November 2020
Citations: 10

Abstract

Cooperative localization is expected to play a crucial role in various applications in the field of Connected and Autonomous Vehicles (CAVs). Future 5G wireless systems are expected to enable cost-effective Vehicle-to-Everything (V2X) systems, allowing CAVs to share the data they collect and measure with the other entities of the network. Typical measurement models deployed for this problem are the absolute position from the Global Positioning System (GPS) and the relative distance and azimuth angle to neighbouring vehicles, extracted from Light Detection and Ranging (LIDAR) or Radio Detection and Ranging (RADAR) sensors. In this paper, we provide a cooperative localization approach that performs multi-modal fusion between the interconnected vehicles by representing a fleet of connected cars as an undirected graph, encoding each vehicle's position relative to its neighbouring vehicles. This method is based on: i) Laplacian processing, a graph signal processing tool that captures the intrinsic geometry of the undirected graph of vehicles rather than their absolute positions in a global coordinate system, and ii) the temporal coherence due to the motion patterns of the moving vehicles.
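To illustrate the core idea of Laplacian processing, the sketch below builds the graph Laplacian of a small vehicle fleet, encodes each vehicle's position relative to its neighbours as differential (Laplacian) coordinates, and recovers absolute positions by combining those relative coordinates with noisy GPS anchors in a least-squares system. This is a minimal illustrative sketch, not the authors' implementation: the fleet size, edge set, noise level, and the soft-anchor weight are all assumptions made for the example.

```python
import numpy as np

# Hypothetical 4-vehicle fleet; edges would come from LIDAR/RADAR
# range and azimuth measurements to neighbouring vehicles.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]

# Ground-truth positions (unknown in practice), used only to
# simulate the measurements for this sketch.
true_pos = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, -1.0], [30.0, 1.0]])

# Graph Laplacian L = D - A of the undirected vehicle graph.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Differential coordinates encode each vehicle's position relative
# to its neighbours (intrinsic geometry): delta = L @ p.
delta = L @ true_pos

# Noisy GPS provides absolute anchors; stacking soft anchor
# constraints under L gives an over-determined least-squares system.
rng = np.random.default_rng(0)
gps = true_pos + rng.normal(scale=1.0, size=true_pos.shape)
w = 1.0  # anchor weight (assumed, trades GPS trust vs. graph geometry)
Aug = np.vstack([L, w * np.eye(n)])
b = np.vstack([delta, w * gps])
est, *_ = np.linalg.lstsq(Aug, b, rcond=None)

print(np.round(est, 2))
```

The least-squares solution balances the relative geometry of the fleet (which LIDAR/RADAR measures accurately) against the noisy absolute GPS fixes; the paper additionally exploits temporal coherence across time steps, which this static sketch omits.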