Gradient-based inference of abstract task representations for generalization in neural networks

Ali Hummos, Felipe del Río, Brabeeba Mien Wang, Julio Hurtado, Cristian B. Calderon, Guangyu Robert Yang

arXiv:2407.17356 (arXiv - CS - Neural and Evolutionary Computing), 24 July 2024
Abstract
Humans and many animals show remarkably adaptive behavior and can respond differently to the same input depending on their internal goals. The brain not only represents the intermediate abstractions needed to perform a computation but also actively maintains a representation of the computation itself (task abstraction). Such separation of the computation and its abstraction is associated with faster learning, flexible decision-making, and broad generalization capacity. We investigate whether such benefits might extend to neural networks trained with task abstractions. For these benefits to emerge, one needs a task inference mechanism with two crucial abilities: first, the ability to infer abstract task representations when they are no longer explicitly provided (task inference), and second, the ability to manipulate task representations to adapt to novel problems (task recomposition). To tackle this, we cast task inference as an optimization problem from a variational inference perspective and ground our approach in an expectation-maximization framework. We show that gradients backpropagated through a neural network to a task representation layer are an efficient heuristic for inferring current task demands, a process we refer to as gradient-based inference (GBI). Further iterative optimization of the task representation layer allows abstractions to be recomposed to adapt to novel situations. Using a toy example, a novel image classifier, and a language model, we demonstrate that GBI provides higher learning efficiency and generalization to novel tasks, and limits forgetting. Moreover, we show that GBI has unique advantages, such as preserving information for uncertainty estimation and detecting out-of-distribution samples.
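
To make the mechanism concrete, below is a minimal PyTorch sketch of gradient-based inference. It is written for this summary, not taken from the paper: the architecture, the names TaskConditionedNet and infer_task, and all hyperparameters are illustrative assumptions. The idea it demonstrates follows the abstract's description: freeze the network's weights and optimize only a task-embedding vector z by backpropagating the data loss, so that the gradients reaching the task layer carry the task-inference signal. Loosely, optimizing z with the weights fixed plays the role of an E-step, while ordinary weight training with z given plays the role of the M-step (our reading of the abstract's expectation-maximization framing).

    # Hypothetical sketch of gradient-based inference (GBI); not the
    # authors' code. Architecture, names, and hyperparameters are
    # illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TaskConditionedNet(nn.Module):
        """A network whose computation is conditioned on a task embedding z."""
        def __init__(self, in_dim=10, task_dim=4, hidden=64, out_dim=10):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(in_dim + task_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x, z):
            # Broadcast the task embedding across the batch and concatenate.
            z_batch = z.expand(x.shape[0], -1)
            return self.body(torch.cat([x, z_batch], dim=-1))

    def infer_task(model, x, y, task_dim=4, steps=50, lr=0.1):
        """Infer a task embedding for a batch (x, y) by gradient descent.
        The network weights stay frozen; only z is updated, so the
        gradients that reach z act as the task-inference signal."""
        for p in model.parameters():
            p.requires_grad_(False)
        z = torch.zeros(1, task_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x, z), y)
            loss.backward()  # gradient flows through the frozen net into z
            opt.step()
        return z.detach(), loss.item()

    # Example usage (random data, purely illustrative):
    model = TaskConditionedNet()
    x = torch.randn(32, 10)
    y = torch.randint(0, 10, (32,))
    z_hat, residual = infer_task(model, x, y)

In this sketch, the loss remaining after optimization (or the gradient norm at z) is the kind of quantity that could serve as an uncertainty or out-of-distribution score, in the spirit of the advantages the abstract lists: if no setting of the task embedding explains a sample well, the sample is plausibly out of distribution.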