Monocular-Vision-Based Positioning Method for UAV Formation

Yiming Jia, Jinglei Li, Shuai Zhang, Qinghai Yang, Wenqiang Gao, K. Kwak
2022 21st International Symposium on Communications and Information Technologies (ISCIT). Published 2022-09-27. DOI: 10.1109/ISCIT55906.2022.9931313
Citations: 0

Abstract

Addressing the problem of relative positioning of unmanned aerial vehicles (UAVs) within a formation in a GPS-denied environment, we propose a relative positioning method based on monocular vision. First, the method identifies UAVs online with a detection model and computes their actual positions from visual information alone, without auxiliary tags or other sensor inputs. Second, we prune redundant channels from the detection model to raise its detection speed, so that it meets real-time requirements when deployed on an onboard computer with limited computing power. To suit this relative positioning method, we redesign an autonomous tracking controller that takes visual information as its input. Finally, simulation experiments verify the feasibility of the proposed method.
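The abstract does not spell out how a position is recovered from a single camera, but the standard route is the pinhole camera model: given a known physical target size and the camera intrinsics, a detected bounding box yields a depth estimate, and the pixel coordinates can then be back-projected into the camera frame. The sketch below illustrates that geometry only; the focal length, UAV width, and function names are illustrative assumptions, not values from the paper.

```python
def estimate_distance(focal_length_px: float,
                      real_width_m: float,
                      bbox_width_px: float) -> float:
    """Depth along the optical axis from apparent size: Z = f * W / w."""
    if bbox_width_px <= 0:
        raise ValueError("bounding-box width must be positive")
    return focal_length_px * real_width_m / bbox_width_px


def pixel_to_camera(u: float, v: float, z: float,
                    fx: float, fy: float, cx: float, cy: float):
    """Back-project pixel (u, v) at depth z into camera coordinates."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z


# Example: a UAV 0.5 m wide appears 100 px wide under a 800 px focal length,
# centered at pixel (720, 400) with principal point (640, 360).
z = estimate_distance(800.0, 0.5, 100.0)
position = pixel_to_camera(720.0, 400.0, z, 800.0, 800.0, 640.0, 360.0)
```

This gives the leader's position in the follower's camera frame, which is exactly the kind of relative measurement a vision-driven formation controller can consume.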
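The speedup step prunes "redundant channels" from the detector. The paper does not state its pruning criterion; a common choice is to rank each convolutional filter by the L1 norm of its weights and keep only the highest-ranked ones, which the following sketch illustrates under that assumption.

```python
import numpy as np


def prune_channels(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Select output channels to keep from one conv layer.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the sorted indices of the channels with the largest
    L1 filter norms; the rest are candidates for removal.
    """
    scores = np.abs(weights).sum(axis=(1, 2, 3))  # L1 norm per filter
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.argsort(scores)[-n_keep:]           # top-scoring filters
    return np.sort(keep)
```

Dropping the low-norm channels (and the matching input slices of the next layer) shrinks the model's multiply-accumulate count roughly in proportion to the channels removed, which is what makes real-time inference feasible on a compute-limited onboard computer; a fine-tuning pass typically recovers most of the lost accuracy.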