Confabulation dynamics in a reservoir computer: Filling in the gaps with untrained attractors
Authors: Jack O'Hagan, Andrew Keane, Andrew Flynn
Journal: Chaos (Chaos: An Interdisciplinary Journal of Nonlinear Science), Vol. 35, No. 9, published 2025-09-01 (Journal Article; JCR Q1, Mathematics, Applied; Impact Factor 3.2)
DOI: 10.1063/5.0283285 — https://doi.org/10.1063/5.0283285
Citations: 0
Abstract
Artificial intelligence has advanced significantly in recent years, thanks to innovations in the design and training of artificial neural networks (ANNs). Despite these advancements, we still understand relatively little about how elementary forms of ANNs learn, fail to learn, and generate false information without the intent to deceive, a phenomenon known as "confabulation." To provide some foundational insight, in this paper, we analyze how confabulation occurs in reservoir computers (RCs): dynamical systems in the form of ANNs. RCs are particularly useful to study as they are known to confabulate in a well-defined way: when RCs are trained to reconstruct the dynamics of a given attractor, they sometimes construct an attractor that they were not trained to construct, a so-called "untrained attractor" (UA). This paper sheds light on the role played by UAs when reconstruction fails and their influence when modeling transitions between reconstructed attractors. Based on our results, we conclude that UAs are an intrinsic feature of learning systems whose state spaces are bounded and that this means of confabulation may be present in systems beyond RCs.
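The setting the abstract describes can be illustrated with a minimal reservoir computer: a fixed random recurrent network driven by a chaotic signal, with only a linear readout trained, then run in closed loop so the RC reconstructs the attractor on its own. The sketch below is not the paper's setup; all sizes, hyperparameters, and the choice of the Lorenz system are illustrative assumptions. Because the tanh reservoir state is confined to [-1, 1]^N, the closed-loop trajectory is necessarily bounded — the point the abstract makes about bounded state spaces — so a failed reconstruction settles onto some other bounded set, i.e., an untrained attractor, rather than diverging.

```python
import numpy as np

# Minimal echo-state network (reservoir computer) sketch:
# train a linear readout to reconstruct the Lorenz attractor,
# then run the RC autonomously (closed loop).
rng = np.random.default_rng(42)

def lorenz_series(n, dt=0.01):
    """Integrate the Lorenz system with simple Euler steps (illustrative)."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([10.0 * (x[1] - x[0]),
                       x[0] * (28.0 - x[2]) - x[1],
                       x[0] * x[1] - (8.0 / 3.0) * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

N = 300  # reservoir size (arbitrary choice)
W_in = rng.uniform(-0.1, 0.1, (N, 3))          # fixed input weights
W = rng.normal(0.0, 1.0, (N, N))               # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

# Drive the reservoir with the training signal and record its states.
data = lorenz_series(5000)
r = np.zeros(N)
states = np.empty((len(data), N))
for t, u in enumerate(data):
    r = np.tanh(W @ r + W_in @ u)
    states[t] = r

# Ridge-regression readout: predict the next input from the current state
# (transient discarded). Only W_out is trained; W and W_in stay fixed.
X, Y = states[200:-1], data[201:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y).T

# Closed-loop run: feed the RC's own output back as input. tanh keeps the
# reservoir state in [-1, 1]^N, so the trajectory is bounded by construction;
# if it leaves the Lorenz attractor it must land on some other bounded set —
# the kind of "untrained attractor" the paper analyzes.
u = data[-1]
traj = np.empty((1000, 3))
for t in range(1000):
    r = np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    traj[t] = u

print("closed-loop trajectory finite:", bool(np.all(np.isfinite(traj))))
```

A sketch like this makes the bounded-state-space argument concrete: whatever the closed-loop dynamics do, they cannot escape to infinity, so failure modes show up as spurious bounded attractors rather than divergence.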
Journal introduction:
Chaos: An Interdisciplinary Journal of Nonlinear Science is a peer-reviewed journal devoted to increasing the understanding of nonlinear phenomena and to describing their manifestations in a manner comprehensible to researchers from a broad spectrum of disciplines.