{"title":"On robust learning of memory attractors with noisy deep associative memory networks","authors":"Xuan Rao , Bo Zhao , Derong Liu","doi":"10.1016/j.neunet.2025.107474","DOIUrl":null,"url":null,"abstract":"<div><div>Developing the computational mechanism for memory systems is a long-standing focus in machine learning and neuroscience. Recent studies have shown that overparameterized autoencoders (OAEs) implement associative memory (AM) by encoding training data as attractors. However, the learning of memory attractors requires that the norms of all eigenvalues of the input–output Jacobian matrix are strictly less than one. Motivated by the observed strong negative correlation between the attractor robustness and the largest singular value of the Jacobian matrix, we develop the noisy overparameterized autoencoders (NOAEs) for learning robust attractors by injecting random noises into their inputs during the training procedure. Theoretical demonstrations show that the training objective of the NOAE approximately minimizes the upper bound of the weighted sum of the reconstruction error and the square of the largest singular value. Extensive experiments in terms of numerical and image-based datasets show that NOAEs not only increase the success rate of the training samples becoming attractors, but also improve the attractor robustness. 
Codes are available at <span><span>https://github.com/RaoXuan-1998/neural-netowrk-journal-NOAE</span></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107474"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003533","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Developing computational mechanisms for memory systems is a long-standing focus in machine learning and neuroscience. Recent studies have shown that overparameterized autoencoders (OAEs) implement associative memory (AM) by encoding training data as attractors. However, learning memory attractors requires that the magnitudes of all eigenvalues of the input–output Jacobian matrix be strictly less than one. Motivated by the observed strong negative correlation between attractor robustness and the largest singular value of the Jacobian matrix, we develop noisy overparameterized autoencoders (NOAEs) for learning robust attractors by injecting random noise into their inputs during training. Theoretical analysis shows that the training objective of the NOAE approximately minimizes an upper bound on the weighted sum of the reconstruction error and the square of the largest singular value. Extensive experiments on numerical and image-based datasets show that NOAEs not only increase the success rate of training samples becoming attractors, but also improve attractor robustness. Code is available at https://github.com/RaoXuan-1998/neural-netowrk-journal-NOAE.
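The attractor condition described above can be illustrated with a small numerical sketch. The map `f` and stored pattern `x_star` below are hypothetical stand-ins for a trained (N)OAE, not the paper's model: `f` is a toy contractive map whose Jacobian at `x_star` has all eigenvalue magnitudes below one, so iterating `f` from a corrupted query relaxes back to the stored pattern, and the largest singular value of the Jacobian (estimated here by finite differences) stays below one.

```python
import numpy as np

# Hypothetical stored pattern; stands in for a training sample
# that an (N)OAE has encoded as an attractor.
x_star = np.array([1.0, -2.0, 0.5])

def f(x):
    # Toy contractive map with fixed point x_star. Its Jacobian at
    # x_star is 0.5 * I, so every eigenvalue has magnitude 0.5 < 1,
    # which is the attractor condition from the abstract.
    return x_star + 0.5 * np.tanh(x - x_star)

def retrieve(x0, steps=50):
    """Associative recall: iterate the map until it settles on an attractor."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

def jacobian_sigma_max(g, x, eps=1e-5):
    """Largest singular value of g's Jacobian at x, via central differences."""
    d = x.size
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (g(x + e) - g(x - e)) / (2.0 * eps)
    return np.linalg.svd(J, compute_uv=False)[0]

# A corrupted query relaxes back onto the stored pattern.
noisy = x_star + 0.3 * np.array([1.0, -1.0, 1.0])
recovered = retrieve(noisy)
print(np.max(np.abs(recovered - x_star)))  # tiny residual
print(jacobian_sigma_max(f, x_star))       # below 1, so x_star is attractive
```

In this sketch the negative correlation the paper exploits is visible directly: the smaller the Jacobian's largest singular value at the fixed point, the faster perturbed queries contract back, i.e. the more robust the attractor.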
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.