Simple Recurrent Networks are Interactive.

Impact Factor 3.2 · CAS Tier 3 (Psychology) · JCR Q1 (Psychology, Experimental)
James S Magnuson, Sahil Luthra
{"title":"Simple Recurrent Networks are Interactive.","authors":"James S Magnuson, Sahil Luthra","doi":"10.3758/s13423-024-02608-y","DOIUrl":null,"url":null,"abstract":"<p><p>There is disagreement among cognitive scientists as to whether a key computational framework - the Simple Recurrent Network (SRN; Elman, Machine Learning, 7(2), 195-225, 1991; Elman, Cognitive Science, 14(2), 179-211, 1990) - is a feedforward system. SRNs have been essential tools in advancing theories of learning, development, and processing in cognitive science for more than three decades. If SRNs were feedforward systems, there would be pervasive theoretical implications: Anything an SRN can do would therefore be explainable without interaction (feedback). However, despite claims that SRNs (and by extension recurrent neural networks more generally) are feedforward (Norris, 1993), this is not the case. Feedforward networks by definition are acyclic graphs - they contain no loops. SRNs contain loops - from hidden units back to hidden units with a time delay - and are therefore cyclic graphs. As we demonstrate, they are interactive in the sense normally implied for networks with feedback connections between layers: In an SRN, bottom-up inputs are inextricably mixed with previous model-internal computations. Inputs are transmitted to hidden units by multiplying them by input-to-hidden weights. However, hidden units simultaneously receive their own previous activations as input via hidden-to-hidden connections with a one-step time delay (typically via context units). These are added to the input-to-hidden values, and the sums are transformed by an activation function. Thus, bottom-up inputs are mixed with the products of potentially many preceding transformations of inputs and model-internal states. We discuss theoretical implications through a key example from psycholinguistics where the status of SRNs as feedforward or interactive has crucial ramifications.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychonomic Bulletin & Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13423-024-02608-y","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

There is disagreement among cognitive scientists as to whether a key computational framework - the Simple Recurrent Network (SRN; Elman, Machine Learning, 7(2), 195-225, 1991; Elman, Cognitive Science, 14(2), 179-211, 1990) - is a feedforward system. SRNs have been essential tools in advancing theories of learning, development, and processing in cognitive science for more than three decades. If SRNs were feedforward systems, there would be pervasive theoretical implications: Anything an SRN can do would therefore be explainable without interaction (feedback). However, despite claims that SRNs (and by extension recurrent neural networks more generally) are feedforward (Norris, 1993), this is not the case. Feedforward networks by definition are acyclic graphs - they contain no loops. SRNs contain loops - from hidden units back to hidden units with a time delay - and are therefore cyclic graphs. As we demonstrate, they are interactive in the sense normally implied for networks with feedback connections between layers: In an SRN, bottom-up inputs are inextricably mixed with previous model-internal computations. Inputs are transmitted to hidden units by multiplying them by input-to-hidden weights. However, hidden units simultaneously receive their own previous activations as input via hidden-to-hidden connections with a one-step time delay (typically via context units). These are added to the input-to-hidden values, and the sums are transformed by an activation function. Thus, bottom-up inputs are mixed with the products of potentially many preceding transformations of inputs and model-internal states. We discuss theoretical implications through a key example from psycholinguistics where the status of SRNs as feedforward or interactive has crucial ramifications.
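
To make the abstract's computational argument concrete, here is a minimal sketch of one SRN time step in NumPy. All names and dimensions are illustrative assumptions, not from the paper, and tanh stands in as a generic squashing nonlinearity (Elman's original networks used a logistic activation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, chosen only for this sketch.
n_in, n_hid = 4, 8

# Input-to-hidden weights and hidden-to-hidden (context) weights.
W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))
b = np.zeros(n_hid)

def srn_step(x_t, h_prev):
    """One SRN update: the bottom-up input (W_ih @ x_t) and the
    previous hidden state fed back through W_hh are summed *before*
    the nonlinearity, so the two sources are inextricably mixed."""
    return np.tanh(W_ih @ x_t + W_hh @ h_prev + b)

# Unrolling over time makes the cycle explicit: each h depends on
# the previous h, which depends on the one before it, and so on.
# The hidden layer thus sits on a loop (hidden -> hidden with a
# one-step delay) -- exactly what an acyclic feedforward network
# cannot contain.
h = np.zeros(n_hid)
for x_t in rng.normal(size=(3, n_in)):
    h = srn_step(x_t, h)
```

After even a few steps, h is a nonlinear function of every earlier input and every earlier hidden state, which is the sense in which the abstract says bottom-up inputs are mixed with "potentially many preceding transformations of inputs and model-internal states."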

Source Journal

Psychonomic Bulletin & Review
CiteScore: 6.70 · Self-citation rate: 2.90% · Articles published per year: 165

About the journal: The journal provides coverage spanning a broad spectrum of topics in all areas of experimental psychology. The journal is primarily dedicated to the publication of theory and review articles and brief reports of outstanding experimental work. Areas of coverage include cognitive psychology broadly construed, including but not limited to action, perception, & attention, language, learning & memory, reasoning & decision making, and social cognition. We welcome submissions that approach these issues from a variety of perspectives such as behavioral measurements, comparative psychology, development, evolutionary psychology, genetics, neuroscience, and quantitative/computational modeling. We particularly encourage integrative research that crosses traditional content and methodological boundaries.