Guillaume Pourcel, Mirko Goldmann, Ingo Fischer, Miguel C. Soriano
{"title":"利用概念器对递归神经网络进行自适应控制","authors":"Guillaume Pourcel, Mirko Goldmann, Ingo Fischer, Miguel C. Soriano","doi":"arxiv-2405.07236","DOIUrl":null,"url":null,"abstract":"Recurrent Neural Networks excel at predicting and generating complex\nhigh-dimensional temporal patterns. Due to their inherent nonlinear dynamics\nand memory, they can learn unbounded temporal dependencies from data. In a\nMachine Learning setting, the network's parameters are adapted during a\ntraining phase to match the requirements of a given task/problem increasing its\ncomputational capabilities. After the training, the network parameters are kept\nfixed to exploit the learned computations. The static parameters thereby render\nthe network unadaptive to changing conditions, such as external or internal\nperturbation. In this manuscript, we demonstrate how keeping parts of the\nnetwork adaptive even after the training enhances its functionality and\nrobustness. Here, we utilize the conceptor framework and conceptualize an\nadaptive control loop analyzing the network's behavior continuously and\nadjusting its time-varying internal representation to follow a desired target.\nWe demonstrate how the added adaptivity of the network supports the\ncomputational functionality in three distinct tasks: interpolation of temporal\npatterns, stabilization against partial network degradation, and robustness\nagainst input distortion. Our results highlight the potential of adaptive\nnetworks in machine learning beyond training, enabling them to not only learn\ncomplex patterns but also dynamically adjust to changing environments,\nultimately broadening their applicability.","PeriodicalId":501305,"journal":{"name":"arXiv - PHYS - Adaptation and Self-Organizing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive control of recurrent neural networks using conceptors\",\"authors\":\"Guillaume Pourcel, Mirko Goldmann, Ingo Fischer, Miguel C. Soriano\",\"doi\":\"arxiv-2405.07236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recurrent Neural Networks excel at predicting and generating complex\\nhigh-dimensional temporal patterns. Due to their inherent nonlinear dynamics\\nand memory, they can learn unbounded temporal dependencies from data. In a\\nMachine Learning setting, the network's parameters are adapted during a\\ntraining phase to match the requirements of a given task/problem increasing its\\ncomputational capabilities. After the training, the network parameters are kept\\nfixed to exploit the learned computations. The static parameters thereby render\\nthe network unadaptive to changing conditions, such as external or internal\\nperturbation. In this manuscript, we demonstrate how keeping parts of the\\nnetwork adaptive even after the training enhances its functionality and\\nrobustness. Here, we utilize the conceptor framework and conceptualize an\\nadaptive control loop analyzing the network's behavior continuously and\\nadjusting its time-varying internal representation to follow a desired target.\\nWe demonstrate how the added adaptivity of the network supports the\\ncomputational functionality in three distinct tasks: interpolation of temporal\\npatterns, stabilization against partial network degradation, and robustness\\nagainst input distortion. 
Our results highlight the potential of adaptive\\nnetworks in machine learning beyond training, enabling them to not only learn\\ncomplex patterns but also dynamically adjust to changing environments,\\nultimately broadening their applicability.\",\"PeriodicalId\":501305,\"journal\":{\"name\":\"arXiv - PHYS - Adaptation and Self-Organizing Systems\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Adaptation and Self-Organizing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2405.07236\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Adaptation and Self-Organizing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.07236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive control of recurrent neural networks using conceptors
Recurrent Neural Networks excel at predicting and generating complex
high-dimensional temporal patterns. Due to their inherent nonlinear dynamics
and memory, they can learn unbounded temporal dependencies from data. In a
Machine Learning setting, the network's parameters are adapted during a
training phase to match the requirements of a given task or problem, increasing
the network's computational capabilities. After training, the network parameters
are kept fixed to exploit the learned computations. These static parameters
render the network unable to adapt to changing conditions, such as external or
internal perturbations. In this manuscript, we demonstrate how keeping parts of
the network adaptive even after training enhances its functionality and
robustness. Here, we utilize the conceptor framework and conceptualize an
adaptive control loop that continuously analyzes the network's behavior and
adjusts its time-varying internal representation to follow a desired target.
We demonstrate how the added adaptivity of the network supports its
computational functionality in three distinct tasks: interpolation of temporal
patterns, stabilization against partial network degradation, and robustness
against input distortion. Our results highlight the potential of adaptive
networks in machine learning beyond training, enabling them to not only learn
complex patterns but also dynamically adjust to changing environments,
ultimately broadening their applicability.
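The abstract gives no implementation details, but the conceptor mechanism it builds on is well documented. The sketch below, in Python with NumPy, shows the standard conceptor construction (C = R(R + aperture^-2 I)^-1 from the reservoir state correlation matrix R) and a conceptor-constrained state update. The network setup and all names (N, aperture, collect_states, run_with_conceptor) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of conceptor-constrained reservoir dynamics (standard
# conceptor formulation; NOT the authors' implementation). All sizes,
# scalings, and function names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100                                        # reservoir size (assumed)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
W_in = rng.normal(0.0, 1.0, (N, 1))            # input weights
b = rng.normal(0.0, 0.2, (N, 1))               # bias

def collect_states(u, washout=50):
    """Drive the reservoir with a scalar input sequence u; return N x T states."""
    x = np.zeros((N, 1))
    states = []
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t + b)
        if t >= washout:                        # discard the initial transient
            states.append(x)
    return np.hstack(states)

def conceptor(X, aperture=10.0):
    """Conceptor C = R (R + aperture^-2 I)^-1 of the state correlation R."""
    R = X @ X.T / X.shape[1]
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(N))

def run_with_conceptor(C, x0, steps):
    """Autonomous run: the conceptor keeps the state in the pattern's subspace."""
    x, traj = x0.copy(), []
    for _ in range(steps):
        x = C @ np.tanh(W @ x + b)              # conceptor-constrained update
        traj.append(x.copy())
    return np.hstack(traj)
```

One plausible reading of the interpolation task is a linear blend of two pattern conceptors, C_lam = (1 - lam) * C1 + lam * C2 for lam in [0, 1], which morphs the constrained dynamics between the two learned patterns; the manuscript's adaptive control loop would then adjust such a representation online against a measured target, a detail the abstract does not specify.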