Bayesian Inverse Graphics for Few-Shot Concept Learning

Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner
{"title":"用于少量概念学习的贝叶斯逆向图形","authors":"Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner","doi":"arxiv-2409.08351","DOIUrl":null,"url":null,"abstract":"Humans excel at building generalizations of new concepts from just one single\nexample. Contrary to this, current computer vision models typically require\nlarge amount of training samples to achieve a comparable accuracy. In this work\nwe present a Bayesian model of perception that learns using only minimal data,\na prototypical probabilistic program of an object. Specifically, we propose a\ngenerative inverse graphics model of primitive shapes, to infer posterior\ndistributions over physically consistent parameters from one or several images.\nWe show how this representation can be used for downstream tasks such as\nfew-shot classification and pose estimation. Our model outperforms existing\nfew-shot neural-only classification algorithms and demonstrates generalization\nacross varying lighting conditions, backgrounds, and out-of-distribution\nshapes. By design, our model is uncertainty-aware and uses our new\ndifferentiable renderer for optimizing global scene parameters through gradient\ndescent, sampling posterior distributions over object parameters with Markov\nChain Monte Carlo (MCMC), and using a neural based likelihood function.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bayesian Inverse Graphics for Few-Shot Concept Learning\",\"authors\":\"Octavio Arriaga, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner\",\"doi\":\"arxiv-2409.08351\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Humans excel at building generalizations of new concepts from just one single\\nexample. Contrary to this, current computer vision models typically require\\nlarge amount of training samples to achieve a comparable accuracy. In this work\\nwe present a Bayesian model of perception that learns using only minimal data,\\na prototypical probabilistic program of an object. Specifically, we propose a\\ngenerative inverse graphics model of primitive shapes, to infer posterior\\ndistributions over physically consistent parameters from one or several images.\\nWe show how this representation can be used for downstream tasks such as\\nfew-shot classification and pose estimation. Our model outperforms existing\\nfew-shot neural-only classification algorithms and demonstrates generalization\\nacross varying lighting conditions, backgrounds, and out-of-distribution\\nshapes. 
By design, our model is uncertainty-aware and uses our new\\ndifferentiable renderer for optimizing global scene parameters through gradient\\ndescent, sampling posterior distributions over object parameters with Markov\\nChain Monte Carlo (MCMC), and using a neural based likelihood function.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08351\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08351","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Humans excel at building generalizations of new concepts from just one single example. In contrast, current computer vision models typically require large amounts of training samples to achieve comparable accuracy. In this work we present a Bayesian model of perception that learns from only minimal data: a prototypical probabilistic program of an object. Specifically, we propose a generative inverse-graphics model of primitive shapes to infer posterior distributions over physically consistent parameters from one or several images. We show how this representation can be used for downstream tasks such as few-shot classification and pose estimation. Our model outperforms existing few-shot neural-only classification algorithms and demonstrates generalization across varying lighting conditions, backgrounds, and out-of-distribution shapes. By design, our model is uncertainty-aware: it uses our new differentiable renderer to optimize global scene parameters through gradient descent, samples posterior distributions over object parameters with Markov Chain Monte Carlo (MCMC), and employs a neural-based likelihood function.
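The pipeline described in the abstract (a renderer mapping shape parameters to an image, a likelihood comparing the render to the observation, and MCMC over object parameters) can be illustrated with a deliberately minimal toy sketch. The code below is not the authors' model, renderer, or likelihood: it substitutes a soft 2D disc for the primitive-shape renderer, a Gaussian pixel likelihood for the neural likelihood, and plain random-walk Metropolis-Hastings for the paper's MCMC scheme. All names and parameter values are illustrative assumptions.

```python
# Schematic sketch (not the paper's implementation): inverse graphics by MCMC.
# A toy "renderer" maps shape parameters (a disc's centre and radius) to an
# image; Metropolis-Hastings then samples a posterior over those parameters
# given one observed image, using a Gaussian pixel likelihood.
import numpy as np

H = W = 32
yy, xx = np.mgrid[0:H, 0:W]

def render(params):
    """Toy renderer: a soft-edged disc from (cx, cy, radius)."""
    cx, cy, r = params
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    return 1.0 / (1.0 + np.exp(dist - r))      # values in (0, 1)

def log_likelihood(params, observed, sigma=0.1):
    """Gaussian pixel likelihood of the observation under rendered params."""
    residual = observed - render(params)
    return -0.5 * np.sum(residual ** 2) / sigma ** 2

def log_prior(params):
    """Broad uniform prior keeping the disc inside the image."""
    cx, cy, r = params
    inside = (0 <= cx <= W) and (0 <= cy <= H) and (1.0 <= r <= W / 2)
    return 0.0 if inside else -np.inf

def metropolis_hastings(observed, init, n_steps=5000, step=0.5, seed=0):
    """Sample p(params | observed) with a random-walk Metropolis kernel."""
    rng = np.random.default_rng(seed)
    current = np.array(init, dtype=float)
    current_lp = log_prior(current) + log_likelihood(current, observed)
    samples = []
    for _ in range(n_steps):
        proposal = current + rng.normal(scale=step, size=current.shape)
        proposal_lp = log_prior(proposal) + log_likelihood(proposal, observed)
        if np.log(rng.uniform()) < proposal_lp - current_lp:   # accept/reject
            current, current_lp = proposal, proposal_lp
        samples.append(current.copy())
    return np.array(samples)

if __name__ == "__main__":
    true_params = (12.0, 20.0, 6.0)        # hidden scene parameters
    observed = render(true_params)         # a single "observed" image (one shot)
    chain = metropolis_hastings(observed, init=(16.0, 16.0, 4.0))
    posterior_mean = chain[len(chain) // 2:].mean(axis=0)  # discard burn-in
    print("posterior mean (cx, cy, r):", posterior_mean)
```

In this toy setting the posterior over (cx, cy, r) concentrates around the parameters that generated the single observed image, which mirrors, in miniature, how a posterior over object parameters inferred from one example can then support downstream comparisons such as few-shot classification or pose estimation.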