Generalization at Retrieval Using Associative Networks with Transient Weight Changes

Kevin D. Shabahang, H. Yim, S. Dennis
{"title":"基于瞬态权值变化的关联网络的检索泛化","authors":"Kevin D. Shabahang, H. Yim, S. Dennis","doi":"10.31234/osf.io/3nzgh","DOIUrl":null,"url":null,"abstract":"Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets , and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"6 1","pages":"124-155"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Generalization at Retrieval Using Associative Networks with Transient Weight Changes\",\"authors\":\"Kevin D. Shabahang, H. Yim, S. Dennis\",\"doi\":\"10.31234/osf.io/3nzgh\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets , and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. 
We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.\",\"PeriodicalId\":72660,\"journal\":{\"name\":\"Computational brain & behavior\",\"volume\":\"6 1\",\"pages\":\"124-155\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational brain & behavior\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.31234/osf.io/3nzgh\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational brain & behavior","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31234/osf.io/3nzgh","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets , and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
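
To make the mechanism described above concrete, here is a minimal Python sketch, not the authors' implementation: it assumes a Hebbian outer-product encoding of stored patterns, power-iteration retrieval for the Linear-Associative-Net, and a transient update of the form W + η·p·pᵀ for the Dynamic-Eigen-Net-style probe. The encoding scheme, the update rule, and the gain parameter eta are all illustrative assumptions; the paper's actual formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy memory: k random bipolar patterns stored in a Hebbian
# outer-product weight matrix (an assumed encoding, for illustration).
n, k = 64, 5
patterns = rng.choice([-1.0, 1.0], size=(k, n))
W = sum(np.outer(p, p) for p in patterns) / n  # long-term weights


def retrieve_linear(probe, steps=50):
    """Linear-Associative-Net: iterating x <- W x is power iteration,
    so the response collapses onto W's dominant eigenvector and
    effectively ignores the probe."""
    x = probe / np.linalg.norm(probe)
    for _ in range(steps):
        x = W @ x
        x = x / np.linalg.norm(x)  # renormalize each step
    return x


def retrieve_dynamic_eigen(probe, steps=50, eta=1.0):
    """Dynamic-Eigen-Net-style retrieval (sketch): add a temporary,
    probe-specific weight change that biases the eigenspectrum toward
    the external input, retrieve, then discard it. eta is an assumed
    gain parameter, not a value from the paper."""
    p = probe / np.linalg.norm(probe)
    W_tmp = W + eta * np.outer(p, p)  # transient weight change
    x = p.copy()
    for _ in range(steps):
        x = W_tmp @ x
        x = x / np.linalg.norm(x)
    return x  # W itself was never modified


# Probe with a noisy version of a stored pattern: the transiently
# biased net should stay aligned with the probed pattern, while the
# purely linear net drifts toward the same dominant direction for
# any input.
probe = patterns[2] + 0.3 * rng.standard_normal(n)
target = patterns[2] / np.linalg.norm(patterns[2])
print("linear :", float(retrieve_linear(probe) @ target))
print("dynamic:", float(retrieve_dynamic_eigen(probe) @ target))
```

Because the outer-product term is added only for the duration of retrieval and then discarded, the stored weights are unchanged afterward; in this toy setup that is the sense in which generalization happens at retrieval rather than at encoding.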