Neural network models of autonomous adaptive intelligence and artificial general intelligence: how our brains learn large language models and their meanings.

IF 3.5 · CAS Zone 4 (Medicine) · JCR Q2 (Neurosciences)
Frontiers in Systems Neuroscience · Pub Date: 2025-07-30 · eCollection Date: 2025-01-01 · DOI: 10.3389/fnsys.2025.1630151
Stephen Grossberg
{"title":"自主自适应智能和人工通用智能的神经网络模型:我们的大脑如何学习大型语言模型及其含义。","authors":"Stephen Grossberg","doi":"10.3389/fnsys.2025.1630151","DOIUrl":null,"url":null,"abstract":"<p><p>This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing how small language models are learned that have perceptual and affective meanings. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Then, bi-directional associative links can be learned and stably remembered between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. Adaptive resonance theory circuits control model learning and self-stabilizing memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. 
The article summarizes neural network highlights since the 1950s and leading models, including adaptive resonance, deep learning, LLMs, and transformers.</p>","PeriodicalId":12649,"journal":{"name":"Frontiers in Systems Neuroscience","volume":"19 ","pages":"1630151"},"PeriodicalIF":3.5000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343567/pdf/","citationCount":"0","resultStr":"{\"title\":\"Neural network models of autonomous adaptive intelligence and artificial general intelligence: how our brains learn large language models and their meanings.\",\"authors\":\"Stephen Grossberg\",\"doi\":\"10.3389/fnsys.2025.1630151\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing how small language models are learned that have perceptual and affective meanings. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Then, bi-directional associative links can be learned and stably remembered between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. 
Adaptive resonance theory circuits control model learning and self-stabilizing memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article summarizes neural network highlights since the 1950s and leading models, including adaptive resonance, deep learning, LLMs, and transformers.</p>\",\"PeriodicalId\":12649,\"journal\":{\"name\":\"Frontiers in Systems Neuroscience\",\"volume\":\"19 \",\"pages\":\"1630151\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343567/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Systems Neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3389/fnsys.2025.1630151\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Systems Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fnsys.2025.1630151","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing how small language models are learned that have perceptual and affective meanings. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Then, bi-directional associative links can be learned and stably remembered between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. Adaptive resonance theory circuits control model learning and self-stabilizing memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article summarizes neural network highlights since the 1950s and leading models, including adaptive resonance, deep learning, LLMs, and transformers.
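The abstract states that adaptive resonance theory (ART) circuits control the model's learning and give it self-stabilizing memory. As an illustration only (not the paper's actual model), a minimal ART-1-style clustering sketch shows the core mechanism: each binary input is matched against learned category templates, a vigilance test decides between resonance (refine the matched template) and reset (try the next category), and unmatched inputs commit new categories, so old memories are never overwritten by dissimilar inputs. The function name and parameter values here are hypothetical choices for the sketch.

```python
import numpy as np

def art1_learn(patterns, vigilance=0.7, beta=1e-6):
    """Minimal ART-1-style clustering sketch.

    Binary input patterns are compared with committed category
    templates. If the best-matching template passes the vigilance
    test, the system 'resonates' and the template is refined by a
    fast-learning AND; otherwise that category is reset and the
    next candidate is tried. Inputs that match no template commit
    a new category, which is how memory self-stabilizes: learning
    only ever specializes a template toward inputs it already fits.
    """
    templates = []   # one binary template per committed category
    labels = []      # category index assigned to each input
    for x in np.asarray(patterns, dtype=float):
        # Bottom-up choice: rank committed categories by the
        # ART-1 choice function |x AND w| / (beta + |w|).
        order = sorted(
            range(len(templates)),
            key=lambda j: -np.sum(np.minimum(x, templates[j]))
                          / (beta + templates[j].sum()),
        )
        chosen = None
        for j in order:
            # Vigilance test: fraction of the input matched by template j.
            match = np.sum(np.minimum(x, templates[j])) / x.sum()
            if match >= vigilance:
                # Resonance: fast learning shrinks the template (AND).
                templates[j] = np.minimum(x, templates[j])
                chosen = j
                break
            # Otherwise: reset, try the next-best category.
        if chosen is None:
            # No committed category passed vigilance: commit a new one.
            templates.append(x.copy())
            chosen = len(templates) - 1
        labels.append(chosen)
    return labels, templates
```

With vigilance 0.7, two identical patterns share one category while a disjoint pattern commits a second, so learning the new pattern leaves the first template untouched.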

Source journal: Frontiers in Systems Neuroscience
Subject categories: Neuroscience; Developmental Neuroscience
CiteScore: 6.00
Self-citation rate: 3.30%
Articles per year: 144
Review time: 14 weeks
About the journal: Frontiers in Systems Neuroscience publishes rigorously peer-reviewed research that advances our understanding of whole systems of the brain, including those involved in sensation, movement, learning and memory, attention, reward, decision-making, reasoning, executive functions, and emotions.