A Vision-Based Pose Estimation of a Non-Cooperative Target Based on a Self-Supervised Transformer Network

IF 2.1 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (ENGINEERING, AEROSPACE)
Quan Sun, Xuhui Pan, Xiao Ling, Bo Wang, Qinghong Sheng, Jun Li, Zhijun Yan, Ke Yu, Jiasong Wang
{"title":"基于自监督变压器网络的非合作目标视觉姿态估计","authors":"Quan Sun, Xuhui Pan, Xiao Ling, Bo Wang, Qinghong Sheng, Jun Li, Zhijun Yan, Ke Yu, Jiasong Wang","doi":"10.3390/aerospace10120997","DOIUrl":null,"url":null,"abstract":"In the realm of non-cooperative space security and on-orbit service, a significant challenge is accurately determining the pose of abandoned satellites using imaging sensors. Traditional methods for estimating the position of the target encounter problems with stray light interference in space, leading to inaccurate results. Conversely, deep learning techniques require a substantial amount of training data, which is especially difficult to obtain for on-orbit satellites. To address these issues, this paper introduces an innovative binocular pose estimation model based on a Self-supervised Transformer Network (STN) to achieve precise pose estimation for targets even under poor imaging conditions. The proposed method generated simulated training samples considering various imaging conditions. Then, by combining the concepts of convolutional neural networks (CNN) and SIFT features for each sample, the proposed method minimized the disruptive effects of stray light. Furthermore, the feedforward network in the Transformer employed in the proposed method was replaced with a global average pooling layer. This integration of CNN’s bias capabilities compensates for the limitations of the Transformer in scenarios with limited data. Comparative analysis against existing pose estimation methods highlights the superior robustness of the proposed method against variations caused by noisy sample sets. The effectiveness of the algorithm is demonstrated through simulated data, enhancing the current landscape of binocular pose estimation technology for non-cooperative targets in space.","PeriodicalId":48525,"journal":{"name":"Aerospace","volume":"35 14 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Vision-Based Pose Estimation of a Non-Cooperative Target Based on a Self-Supervised Transformer Network\",\"authors\":\"Quan Sun, Xuhui Pan, Xiao Ling, Bo Wang, Qinghong Sheng, Jun Li, Zhijun Yan, Ke Yu, Jiasong Wang\",\"doi\":\"10.3390/aerospace10120997\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the realm of non-cooperative space security and on-orbit service, a significant challenge is accurately determining the pose of abandoned satellites using imaging sensors. Traditional methods for estimating the position of the target encounter problems with stray light interference in space, leading to inaccurate results. Conversely, deep learning techniques require a substantial amount of training data, which is especially difficult to obtain for on-orbit satellites. To address these issues, this paper introduces an innovative binocular pose estimation model based on a Self-supervised Transformer Network (STN) to achieve precise pose estimation for targets even under poor imaging conditions. The proposed method generated simulated training samples considering various imaging conditions. Then, by combining the concepts of convolutional neural networks (CNN) and SIFT features for each sample, the proposed method minimized the disruptive effects of stray light. Furthermore, the feedforward network in the Transformer employed in the proposed method was replaced with a global average pooling layer. 
This integration of CNN’s bias capabilities compensates for the limitations of the Transformer in scenarios with limited data. Comparative analysis against existing pose estimation methods highlights the superior robustness of the proposed method against variations caused by noisy sample sets. The effectiveness of the algorithm is demonstrated through simulated data, enhancing the current landscape of binocular pose estimation technology for non-cooperative targets in space.\",\"PeriodicalId\":48525,\"journal\":{\"name\":\"Aerospace\",\"volume\":\"35 14 1\",\"pages\":\"\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2023-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Aerospace\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.3390/aerospace10120997\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, AEROSPACE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Aerospace","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/aerospace10120997","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, AEROSPACE","Score":null,"Total":0}
Citations: 0

Abstract

In the realm of non-cooperative space security and on-orbit servicing, a significant challenge is accurately determining the pose of abandoned satellites using imaging sensors. Traditional methods for estimating the position of the target encounter problems with stray-light interference in space, leading to inaccurate results. Conversely, deep learning techniques require a substantial amount of training data, which is especially difficult to obtain for on-orbit satellites. To address these issues, this paper introduces an innovative binocular pose estimation model based on a Self-supervised Transformer Network (STN) that achieves precise pose estimation even under poor imaging conditions. The proposed method generates simulated training samples covering various imaging conditions. Then, by combining the concepts of convolutional neural networks (CNNs) and SIFT features for each sample, it minimizes the disruptive effects of stray light. Furthermore, the feedforward network in the Transformer is replaced with a global average pooling layer; this integration of the CNN's inductive bias compensates for the limitations of the Transformer in scenarios with limited data. Comparative analysis against existing pose estimation methods highlights the superior robustness of the proposed method to variations caused by noisy sample sets. The effectiveness of the algorithm is demonstrated on simulated data, advancing the current landscape of binocular pose estimation technology for non-cooperative targets in space.
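
The abstract contains no code, but its most concrete architectural claim, replacing the Transformer's feedforward sublayer with a global average pooling layer, can be illustrated with a short sketch. The PyTorch module below is a hypothetical reading of that idea only: the class name GAPEncoderBlock, the layer sizes, and the choice to broadcast the pooled summary back onto every token before the residual connection are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch (not from the paper): a Transformer encoder block in which
# the position-wise feedforward sublayer is replaced by global average pooling
# over the token dimension, as the abstract describes.
import torch
import torch.nn as nn


class GAPEncoderBlock(nn.Module):
    """Encoder block: multi-head self-attention + global-average-pooling sublayer."""

    def __init__(self, dim: int = 256, num_heads: int = 8, dropout: float = 0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, dropout=dropout,
                                          batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.dropout(attn_out)

        # Global average pooling over the token dimension stands in for the usual
        # two-layer MLP; the pooled summary is broadcast back to every token so
        # the residual connection keeps the sequence shape.
        pooled = self.norm2(x).mean(dim=1, keepdim=True)   # (batch, 1, dim)
        x = x + self.dropout(pooled.expand_as(x))
        return x


if __name__ == "__main__":
    block = GAPEncoderBlock(dim=256, num_heads=8)
    feats = torch.randn(2, 64, 256)   # e.g. 64 feature tokens per stereo view
    print(block(feats).shape)          # torch.Size([2, 64, 256])
```

Because pooling has no learnable parameters, this sublayer injects a strong averaging bias at zero parameter cost, which is one way to read the abstract's claim that a CNN-style bias offsets the Transformer's appetite for training data.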
Source Journal
Aerospace
CiteScore: 3.40
Self-citation rate: 23.10%
Articles published: 661
Review turnaround: 6 weeks
Journal introduction: Aerospace is a multidisciplinary science inviting submissions on, but not limited to, the following subject areas: aerodynamics, computational fluid dynamics, fluid-structure interaction, flight mechanics, plasmas, research instrumentation, test facilities, environment, material science, structural analysis, thermophysics and heat transfer, thermal-structure interaction, aeroacoustics, optics, electromagnetism and radar, propulsion, power generation and conversion, fuels and propellants, combustion, multidisciplinary design optimization, software engineering, data analysis, signal and image processing, artificial intelligence, aerospace vehicles' operation, control and maintenance, risk and reliability, human factors, human-automation interaction, airline operations and management, air traffic management, airport design, meteorology, space exploration, and multi-physics interaction.