CLOVER: Contrastive Learning for Onboard Vision-Enabled Robotics

Impact factor 1.3 · JCR Q2, Engineering, Aerospace · CAS Region 4, Engineering & Technology
Grace Vincent, I. R. Ward, Charles Moore, Jingdao Chen, Kai Pak, Alice Yepremyan, Brian Wilson, Edwin Y. Goh
Citations: 0

Abstract

Current deep-learning models employed by the planetary science community are constrained by a dearth of annotated training data for planetary images. Current models also frequently suffer from inductive bias due to domain shifts when using the same model on data obtained from different spacecraft or different time periods. Moreover, power and compute constraints preclude state-of-the-art vision models from being implemented on robotic spacecraft. In this research, we propose a self-supervised learning (SSL) framework that leverages contrastive learning techniques to improve upon state-of-the-art performance on several published Mars computer vision benchmarks. Our SSL framework enables models to be trained using fewer labels, generalize well to different tasks, and achieve higher computational efficiency. Results on published Mars computer vision benchmarks show that contrastive pretraining outperforms plain supervised learning by 2–10%. We further investigate the importance of dataset heterogeneity in mixed-domain contrastive pretraining. Using self-supervised distillation, we were also able to train a compact ResNet-18 student model to achieve better accuracy than its ResNet-152 teacher model while having 5.2 times fewer parameters. We expect that these SSL techniques will be relevant to the planning of future robotic missions, and remote sensing identification of target destinations with high scientific value.
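The paper does not reproduce its training objective here, but the contrastive pretraining the abstract describes is conventionally built on an NT-Xent (normalized temperature-scaled cross-entropy) loss of the kind popularized by SimCLR. A minimal NumPy sketch of that objective follows; it is an illustration of the general technique, not the authors' exact formulation, and all names and the temperature value are illustrative:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1, z2: arrays of shape (N, D) holding embeddings of two augmented
    views of the same N images; row i of z1 and row i of z2 form a
    positive pair, and every other row is treated as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive partner of index i is i+N (and of i+N is i).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together in embedding space while pushing apart views of different images, which is what lets the encoder learn useful features from unlabeled planetary imagery before any labels are introduced.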
Source journal: Journal of Spacecraft and Rockets (Engineering: Aerospace)
CiteScore: 3.60
Self-citation rate: 18.80%
Articles per year: 185
Review time: 4.5 months
About the journal: This journal, which started it all back in 1963, is devoted to the advancement of the science and technology of astronautics and aeronautics through the dissemination of original archival research papers disclosing new theoretical developments and/or experimental results. Topics include aeroacoustics, aerodynamics, combustion, fundamentals of propulsion, fluid mechanics and reacting flows, fundamental aspects of the aerospace environment, hydrodynamics, lasers and associated phenomena, plasmas, research instrumentation and facilities, structural mechanics and materials, optimization, and thermomechanics and thermochemistry. Papers are also sought which review in an intensive manner the results of recent research developments on any of the topics listed above.