Small-shot Multi-modal Distillation for Vision-based Autonomous Steering

Yu Shen, Luyu Yang, Xijun Wang, Ming-Chyuan Lin
2023 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/ICRA48891.2023.10160803
Published: 2023-05-29
Citations: 0

Abstract

In this paper, we propose a novel learning framework for autonomous systems that uses a small amount of “auxiliary information” that complements the learning of the main modality, called “small-shot auxiliary modality distillation network (AMD-S-Net)”. The AMD-S-Net contains a two-stream framework design that can fully extract information from different types of data (i.e., paired/unpaired multi-modality data) to distill knowledge more effectively. We also propose a novel training paradigm based on the “reset operation” that enables the teacher to explore the local loss landscape near the student domain iteratively, providing local landscape information and potential directions to discover better solutions by the student, thus achieving higher learning performance. Our experiments show that AMD-S-Net and our training paradigm outperform other SOTA methods by up to 12.7% and 18.1% improvement in autonomous steering, respectively.
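The core ideas in the abstract — a teacher that learns from paired multimodal data, a student that distills from it using only the main modality, and a "reset operation" that periodically restarts the teacher near the student so it explores the loss landscape local to the student's domain — can be illustrated with a toy sketch. This is a minimal conceptual illustration using linear models and NumPy, not the authors' AMD-S-Net implementation; all model shapes, learning rates, and the reset schedule here are illustrative assumptions.

```python
import numpy as np

# Toy sketch of teacher-student distillation with a periodic "reset
# operation". Linear models stand in for the paper's networks; the
# dimensions and hyperparameters below are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))   # main-modality features (e.g. RGB)
A = rng.normal(size=(64, 2))   # auxiliary-modality features (e.g. depth)
y = X @ np.array([0.5, -1.0, 2.0, 0.3]) + 0.1 * rng.normal(size=64)

XT = np.hstack([X, A])         # teacher sees main + auxiliary modalities
w_teacher = rng.normal(size=6)
w_student = np.zeros(4)        # student sees only the main modality

def mse_grad(w, Xm, t):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * Xm.T @ (Xm @ w - t) / len(t)

lr, reset_every = 0.05, 10
for step in range(100):
    # Teacher trains on paired multimodal data against ground truth.
    w_teacher -= lr * mse_grad(w_teacher, XT, y)
    # Student distills from the teacher's soft predictions,
    # using only the main modality as input.
    soft_targets = XT @ w_teacher
    w_student -= lr * mse_grad(w_student, X, soft_targets)
    # Reset operation: restart the teacher's main-modality weights from
    # the student, so its next exploration stays near the student domain
    # and yields landscape information the student can actually follow.
    if (step + 1) % reset_every == 0:
        w_teacher[:4] = w_student

student_mse = float(np.mean((X @ w_student - y) ** 2))
```

After training, the student approximates the ground-truth mapping from the main modality alone, having never seen the labels directly — only the teacher's predictions. The reset keeps the teacher's trajectory anchored to the student's current solution rather than letting it converge somewhere the student cannot reach.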