{"title":"Predictive Coding Model Detects Novelty on Different Levels of Representation Hierarchy.","authors":"T Ed Li, Mufeng Tang, Rafal Bogacz","doi":"10.1162/neco_a_01769","DOIUrl":null,"url":null,"abstract":"<p><p>Novelty detection, also known as familiarity discrimination or recognition memory, refers to the ability to distinguish whether a stimulus has been seen before. It has been hypothesized that novelty detection can naturally arise within networks that store memory or learn efficient neural representation because these networks already store information on familiar stimuli. However, existing computational models supporting this idea have yet to reproduce the high capacity of human recognition memory, leaving the hypothesis in question. This article demonstrates that predictive coding, an established model previously shown to effectively support representation learning and memory, can also naturally discriminate novelty with high capacity. The predictive coding model includes neurons encoding prediction errors, and we show that these neurons produce higher activity for novel stimuli, so that the novelty can be decoded from their activity. Additionally, hierarchical predictive coding networks detect novelty at different levels of abstraction within the hierarchy, from low-level sensory features like arrangements of pixels to high-level semantic features like object identities. Overall, based on predictive coding, this article establishes a unified framework that brings together novelty detection, associative memory, and representation learning, demonstrating that a single model can capture these various cognitive functions.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1373-1408"},"PeriodicalIF":2.7000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computation","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1162/neco_a_01769","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Novelty detection, also known as familiarity discrimination or recognition memory, refers to the ability to distinguish whether a stimulus has been seen before. It has been hypothesized that novelty detection can naturally arise within networks that store memory or learn efficient neural representation because these networks already store information on familiar stimuli. However, existing computational models supporting this idea have yet to reproduce the high capacity of human recognition memory, leaving the hypothesis in question. This article demonstrates that predictive coding, an established model previously shown to effectively support representation learning and memory, can also naturally discriminate novelty with high capacity. The predictive coding model includes neurons encoding prediction errors, and we show that these neurons produce higher activity for novel stimuli, so that the novelty can be decoded from their activity. Additionally, hierarchical predictive coding networks detect novelty at different levels of abstraction within the hierarchy, from low-level sensory features like arrangements of pixels to high-level semantic features like object identities. Overall, based on predictive coding, this article establishes a unified framework that brings together novelty detection, associative memory, and representation learning, demonstrating that a single model can capture these various cognitive functions.
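As a rough illustration of the mechanism described in the abstract (not the authors' hierarchical model), the sketch below trains a single-layer predictive coding network on a set of "familiar" random patterns and then compares the total activity of its prediction-error neurons on familiar versus novel inputs. All dimensions, learning rates, and stimulus statistics are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes and hyperparameters (illustrative only, not from the paper).
n_input, n_latent = 100, 20
n_familiar, n_novel = 20, 20
lr_w, lr_z = 0.01, 0.1
inference_steps, epochs = 100, 200

def infer(x, W):
    """Relax latent activity z to minimise the prediction-error energy
    E = 0.5*||x - W z||^2 + 0.5*||z||^2 (simple Gaussian prior on z)."""
    z = np.zeros(W.shape[1])
    for _ in range(inference_steps):
        err = x - W @ z                      # activity of prediction-error neurons
        z += lr_z * (W.T @ err - z)          # gradient descent on E w.r.t. z
    return z, x - W @ z

def novelty_score(x, W):
    """Total squared error-neuron activity after inference."""
    _, err = infer(x, W)
    return float(np.sum(err ** 2))

# Familiar stimuli the network is trained on, and novel stimuli it never sees.
familiar = rng.standard_normal((n_familiar, n_input))
novel = rng.standard_normal((n_novel, n_input))

# Learning phase: adjust weights so the network predicts the familiar stimuli.
W = 0.01 * rng.standard_normal((n_input, n_latent))
for _ in range(epochs):
    for x in familiar:
        z, err = infer(x, W)
        W += lr_w * np.outer(err, z)         # local, Hebbian-like weight update

print("mean error energy (familiar):",
      np.mean([novelty_score(x, W) for x in familiar]))
print("mean error energy (novel):   ",
      np.mean([novelty_score(x, W) for x in novel]))
```

After training, the summed squared error-neuron activity should be markedly lower for familiar patterns than for novel ones, which is the signal from which novelty can be decoded; the paper extends this idea to hierarchical networks, where error neurons at different layers signal novelty at different levels of abstraction.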
About the Journal:
Neural Computation is uniquely positioned at the crossroads between neuroscience and TMCS and welcomes the submission of original papers from all areas of TMCS, including: Advanced experimental design; Analysis of chemical sensor data; Connectomic reconstructions; Analysis of multielectrode and optical recordings; Genetic data for cell identity; Analysis of behavioral data; Multiscale models; Analysis of molecular mechanisms; Neuroinformatics; Analysis of brain imaging data; Neuromorphic engineering; Principles of neural coding, computation, circuit dynamics, and plasticity; Theories of brain function.