Cooperative Robotics Visible Light Positioning: An Intelligent Compressed Sensing and GAN-Enabled Framework

IF 8.7 · CAS Tier 1 (Engineering & Technology) · JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Sicong Liu;Xianyao Wang;Jian Song;Zhu Han
{"title":"合作机器人可见光定位:智能压缩传感和广义泛函模型框架","authors":"Sicong Liu;Xianyao Wang;Jian Song;Zhu Han","doi":"10.1109/JSTSP.2024.3368661","DOIUrl":null,"url":null,"abstract":"This article presents a compressed sensing (CS) based framework for visible light positioning (VLP), designed to achieve simultaneous and precise localization of multiple intelligent robots within an indoor factory. The framework leverages light-emitting diodes (LEDs) originally intended for illumination purposes as anchors, repurposing them for the localization of robots equipped with photodetectors. By predividing the plane encompassing the robot positions into a grid, with the number of robots being notably fewer than the grid points, the inherent sparsity of the arrangement is harnessed. To construct an effective sparse measurement model, a sequence of aggregation, autocorrelation, and cross-correlation operations are employed to the signals. Consequently, the complex task of localizing multiple targets is reformulated into a sparse recovery problem, amenable to resolution through CS-based algorithms. Notably, the localization precision is augmented by inter-target cooperation among the robots, and inter-anchor cooperation among distinct LEDs. Furthermore, to fortify the robustness of localization, a generative adversarial network (GAN) is introduced into the proposed localization framework. The simulation results affirm that the proposed framework can successfully achieve centimeter-level accuracy for simultaneous localization of multiple targets.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"18 3","pages":"407-418"},"PeriodicalIF":8.7000,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cooperative Robotics Visible Light Positioning: An Intelligent Compressed Sensing and GAN-Enabled Framework\",\"authors\":\"Sicong Liu;Xianyao Wang;Jian Song;Zhu Han\",\"doi\":\"10.1109/JSTSP.2024.3368661\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article presents a compressed sensing (CS) based framework for visible light positioning (VLP), designed to achieve simultaneous and precise localization of multiple intelligent robots within an indoor factory. The framework leverages light-emitting diodes (LEDs) originally intended for illumination purposes as anchors, repurposing them for the localization of robots equipped with photodetectors. By predividing the plane encompassing the robot positions into a grid, with the number of robots being notably fewer than the grid points, the inherent sparsity of the arrangement is harnessed. To construct an effective sparse measurement model, a sequence of aggregation, autocorrelation, and cross-correlation operations are employed to the signals. Consequently, the complex task of localizing multiple targets is reformulated into a sparse recovery problem, amenable to resolution through CS-based algorithms. Notably, the localization precision is augmented by inter-target cooperation among the robots, and inter-anchor cooperation among distinct LEDs. Furthermore, to fortify the robustness of localization, a generative adversarial network (GAN) is introduced into the proposed localization framework. 
The simulation results affirm that the proposed framework can successfully achieve centimeter-level accuracy for simultaneous localization of multiple targets.\",\"PeriodicalId\":13038,\"journal\":{\"name\":\"IEEE Journal of Selected Topics in Signal Processing\",\"volume\":\"18 3\",\"pages\":\"407-418\"},\"PeriodicalIF\":8.7000,\"publicationDate\":\"2024-02-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Selected Topics in Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10443444/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10443444/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

This article presents a compressed sensing (CS) based framework for visible light positioning (VLP), designed to achieve simultaneous and precise localization of multiple intelligent robots within an indoor factory. The framework leverages light-emitting diodes (LEDs) originally intended for illumination purposes as anchors, repurposing them for the localization of robots equipped with photodetectors. By predividing the plane encompassing the robot positions into a grid, with the number of robots being notably fewer than the grid points, the inherent sparsity of the arrangement is harnessed. To construct an effective sparse measurement model, a sequence of aggregation, autocorrelation, and cross-correlation operations are employed to the signals. Consequently, the complex task of localizing multiple targets is reformulated into a sparse recovery problem, amenable to resolution through CS-based algorithms. Notably, the localization precision is augmented by inter-target cooperation among the robots, and inter-anchor cooperation among distinct LEDs. Furthermore, to fortify the robustness of localization, a generative adversarial network (GAN) is introduced into the proposed localization framework. The simulation results affirm that the proposed framework can successfully achieve centimeter-level accuracy for simultaneous localization of multiple targets.
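As a rough illustration of the grid-based sparse formulation described in the abstract, the following Python sketch builds a toy LED-gain dictionary over candidate grid points and recovers two robot positions with orthogonal matching pursuit (OMP). The Lambertian channel model, the LED layout, the noise level, and the choice of OMP are illustrative assumptions; the paper's actual measurement model (built from aggregation, autocorrelation, and cross-correlation of the received signals) and its cooperation and GAN enhancements are not reproduced here.

```python
# A minimal, illustrative sketch (not the paper's algorithm): grid-based
# sparse recovery of multiple targets from LED measurements via OMP.
# The Lambertian gain model, LED layout, and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Candidate robot positions: a 5 x 5 grid on the floor plane (1 m spacing).
gx, gy = np.meshgrid(np.arange(0.5, 5.0, 1.0), np.arange(0.5, 5.0, 1.0))
grid = np.column_stack([gx.ravel(), gy.ravel()])          # 25 grid points

# LED anchors on the ceiling (x, y), mounted at height h = 3 m.
lx, ly = np.meshgrid([1.0, 2.5, 4.0], [1.0, 2.5, 4.0])
leds = np.column_stack([lx.ravel(), ly.ravel()])          # 9 anchors

def lambertian_gain(led_xy, point_xy, h=3.0, m=1.0):
    """Simplified Lambertian line-of-sight gain from an LED to a floor point."""
    d2 = np.sum((led_xy - point_xy) ** 2) + h ** 2
    cos_phi = h / np.sqrt(d2)
    return (m + 1) / (2 * np.pi) * cos_phi ** (m + 1) / d2

# Sensing dictionary A: column g holds the 9 LED gains seen at grid point g.
A = np.array([[lambertian_gain(led, p) for p in grid] for led in leds])

# Sparse ground truth: 2 robots occupy 2 of the 25 grid points.
x_true = np.zeros(grid.shape[0])
targets = rng.choice(grid.shape[0], size=2, replace=False)
x_true[targets] = 1.0
y = A @ x_true + 1e-4 * np.linalg.norm(A @ x_true) * rng.standard_normal(A.shape[0])

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y ≈ A x."""
    residual, support = y.copy(), []
    x_ls = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_ls
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_ls
    return x_hat

x_hat = omp(A, y, k=2)
found = np.argsort(-np.abs(x_hat))[:2]
print("true grid indices:     ", sorted(targets.tolist()))
print("recovered grid indices:", sorted(found.tolist()))
print("estimated positions (m):\n", grid[np.sort(found)])
```

In the paper itself, accuracy is further improved by inter-target and inter-anchor cooperation and by a GAN that strengthens robustness of the recovery; none of that is captured by this toy dictionary-plus-OMP setup.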
Source journal: IEEE Journal of Selected Topics in Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 19.00
Self-citation rate: 1.30%
Articles per year: 135
Review time: 3 months
Journal description: The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others. The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.