Tracking semantic relatedness: numeral classifiers guide gaze to visual world objects

Marit Lobben, Agata Bochynska, Halvor Eifring, Bruno Laeng
Frontiers in Language Sciences · DOI: 10.3389/flang.2023.1222982 · Published: 2023-09-06
Citations: 0

Abstract

Directing visual attention toward items mentioned within utterances can optimize understanding of unfolding spoken language and preparation of appropriate behaviors. In several languages, numeral classifiers specify semantic classes of nouns but can also function as reference trackers. Whereas all classifier types function to single out objects for reference in the real world and may assist attentional guidance, we propose that only sortal classifiers efficiently guide visual attention, being inherently attached to the nouns' semantics. By contrast, container classifiers are only pragmatically attached to the nouns they classify, and default classifiers index a noun without specifying its semantics. Using eye tracking and the "visual world paradigm", we had Chinese speakers (N = 20) listen to sentences, and we observed that they looked spontaneously at the target object within 150 ms after offset of the sortal classifier. The same occurred after about 200 ms for the container classifiers, but only after about 700 ms for the default classifier. This looking pattern was absent in a control group of non-Chinese speakers; the Chinese speakers' gaze behavior can therefore be ascribed only to classifier semantics and not to artifacts of the visual objects. Thus, we found that classifier type affects the rapidity of spontaneous looks to the target objects on a screen. These significantly different latencies indicate that the stronger the semantic relatedness between a classifier and its noun, the more efficient the deployment of overt attention.