Scale-Dependent Relationships in Natural Language.

Computational Brain & Behavior · Pub Date: 2021-06-01 · Epub Date: 2021-01-04 · DOI: 10.1007/s42113-020-00094-8
Aakash Sarkar, Marc W Howard
{"title":"Scale-Dependent Relationships in Natural Language.","authors":"Aakash Sarkar,&nbsp;Marc W Howard","doi":"10.1007/s42113-020-00094-8","DOIUrl":null,"url":null,"abstract":"<p><p>Language, like other natural sequences, exhibits statistical dependencies at a wide range of scales (Lin & Tegmark, 2016). However, many statistical learning models applied to language impose a sampling scale while extracting statistical structure. For instance, Word2Vec creates vector embeddings by sampling context in a window around each word, the size of which defines a strong scale; relationships over much larger temporal scales would be invisible to the algorithm. This paper examines the family of Word2Vec embeddings generated while systematically manipulating the size of the context window. The primary result is that different linguistic relationships are preferentially encoded at different scales. Different scales emphasize different syntactic and semantic relations between words, as assessed both by analogical reasoning tasks in the Google Analogies test set and human similarity rating datasets WordSim-353 and SimLex-999. Moreover, the neighborhoods of a given word in the embeddings change considerably depending on the scale. These results suggest that sampling at any individual scale can only identify a subset of the meaningful relationships a word might have, and point toward the importance of developing scale-free models of semantic meaning.</p>","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"4 ","pages":"164-177"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s42113-020-00094-8","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational brain & behavior","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s42113-020-00094-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/1/4 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Language, like other natural sequences, exhibits statistical dependencies at a wide range of scales (Lin & Tegmark, 2016). However, many statistical learning models applied to language impose a sampling scale while extracting statistical structure. For instance, Word2Vec creates vector embeddings by sampling context in a window around each word, the size of which defines a strong scale; relationships over much larger temporal scales would be invisible to the algorithm. This paper examines the family of Word2Vec embeddings generated while systematically manipulating the size of the context window. The primary result is that different linguistic relationships are preferentially encoded at different scales. Different scales emphasize different syntactic and semantic relations between words, as assessed both by analogical reasoning tasks in the Google Analogies test set and human similarity rating datasets WordSim-353 and SimLex-999. Moreover, the neighborhoods of a given word in the embeddings change considerably depending on the scale. These results suggest that sampling at any individual scale can only identify a subset of the meaningful relationships a word might have, and point toward the importance of developing scale-free models of semantic meaning.
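
As a concrete sketch of the manipulation the abstract describes, the snippet below trains a family of Word2Vec models that differ only in context-window size, then inspects how a word's nearest neighbors shift across scales and scores each scale on the Google Analogies test set. It assumes the gensim library; the corpus file corpus.txt, the probe word, the window sizes, and all other hyperparameters are illustrative placeholders rather than the paper's actual settings.

```python
# Minimal sketch, assuming gensim >= 4.0. The corpus file, probe word,
# window sizes, and hyperparameters are illustrative, not the paper's.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
from gensim.test.utils import datapath

# Hypothetical tokenized corpus, one sentence per line.
corpus = LineSentence("corpus.txt")

# Train one embedding per sampling scale; `window` is the context-window
# size being manipulated.
models = {
    w: Word2Vec(corpus, vector_size=100, window=w, min_count=5, workers=4)
    for w in (2, 5, 10, 25, 50)
}

# The neighborhood of the same word can differ considerably across scales.
for w, model in models.items():
    print(f"window={w}:", model.wv.most_similar("bank", topn=5))

# Score each scale on the Google Analogies questions (file ships with
# gensim's test data); returns overall accuracy plus per-section results.
for w, model in models.items():
    acc, _ = model.wv.evaluate_word_analogies(datapath("questions-words.txt"))
    print(f"window={w}: analogy accuracy {acc:.3f}")
```

The similarity benchmarks mentioned in the abstract can be scored the same way at each scale: gensim's evaluate_word_pairs accepts tab-separated word-pair files such as WordSim-353 and SimLex-999 and reports correlation with the human ratings.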
