Grace Vincent, I. R. Ward, Charles Moore, Jingdao Chen, Kai Pak, Alice Yepremyan, Brian Wilson, Edwin Y. Goh
Journal of Spacecraft and Rockets (Q2, Engineering, Aerospace) · DOI: 10.2514/1.a35767 · Published 2023-12-29 · Journal Article
CLOVER: Contrastive Learning for Onboard Vision-Enabled Robotics
Current deep-learning models employed by the planetary science community are constrained by a dearth of annotated training data for planetary images. Current models also frequently suffer from inductive bias due to domain shifts when using the same model on data obtained from different spacecraft or different time periods. Moreover, power and compute constraints preclude state-of-the-art vision models from being implemented on robotic spacecraft. In this research, we propose a self-supervised learning (SSL) framework that leverages contrastive learning techniques to improve upon state-of-the-art performance on several published Mars computer vision benchmarks. Our SSL framework enables models to be trained using fewer labels, generalize well to different tasks, and achieve higher computational efficiency. Results on published Mars computer vision benchmarks show that contrastive pretraining outperforms plain supervised learning by 2–10%. We further investigate the importance of dataset heterogeneity in mixed-domain contrastive pretraining. Using self-supervised distillation, we were also able to train a compact ResNet-18 student model to achieve better accuracy than its ResNet-152 teacher model while having 5.2 times fewer parameters. We expect that these SSL techniques will be relevant to the planning of future robotic missions and to the remote-sensing identification of target destinations with high scientific value.
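The contrastive pretraining the abstract refers to can be illustrated with a SimCLR-style NT-Xent objective, a common choice in this family of methods. The sketch below is a generic, minimal NumPy implementation under that assumption, not the paper's exact loss or training setup; `z1` and `z2` stand for embeddings of two augmented views of the same batch of images.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    as used in SimCLR-style contrastive pretraining.

    z1, z2 : (N, D) arrays -- embeddings of two augmented views,
             where row i of z1 and row i of z2 form a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via unit vectors
    sim = z @ z.T / temperature                       # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # a sample is never its own negative

    n = z1.shape[0]
    # positive index for row i is i+n (first half) or i-n (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # row-wise log-softmax, shifted for numerical stability
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together while pushing all other samples in the batch apart, which is what lets the encoder learn useful features from unlabeled planetary images before any supervised fine-tuning.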
About the journal:
This Journal, which started it all back in 1963, is devoted to the advancement of the science and technology of astronautics and aeronautics through the dissemination of original archival research papers disclosing new theoretical developments and/or experimental results. The topics include aeroacoustics, aerodynamics, combustion, fundamentals of propulsion, fluid mechanics and reacting flows, fundamental aspects of the aerospace environment, hydrodynamics, lasers and associated phenomena, plasmas, research instrumentation and facilities, structural mechanics and materials, optimization, and thermomechanics and thermochemistry. Papers are also sought which review in an intensive manner the results of recent research developments on any of the topics listed above.