Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference

IF 2.4 · CAS Tier 2 (Psychology) · JCR Q2 (PSYCHOLOGY, EXPERIMENTAL)
Lucas Haraped, D. Jacob Gerlofs, Olive Chung-Hui Huang, Cam Hickling, Walter F. Bischof, Pierre Sachse, Alan Kingstone
{"title":"Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference","authors":"Lucas Haraped,&nbsp;D. Jacob Gerlofs,&nbsp;Olive Chung-Hui Huang,&nbsp;Cam Hickling,&nbsp;Walter F. Bischof,&nbsp;Pierre Sachse,&nbsp;Alan Kingstone","doi":"10.1111/cogs.70123","DOIUrl":null,"url":null,"abstract":"<p>During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (<i>n</i> = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"49 10","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70123","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Science","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cogs.70123","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (n = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.
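
The coupling results rest on cross-recurrence quantification analysis (CRQA) over categorical time series, e.g., which painting each partner's gaze, pointing gesture, or verbal reference targets at each sample. Below is a minimal sketch of the core computation, a diagonal cross-recurrence profile; the function name, the 10 Hz sampling setup, and the synthetic data are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def cross_recurrence_profile(a, b, max_lag):
    """Diagonal cross-recurrence profile for two categorical time
    series (e.g., speaker vs. listener gaze targets). For each lag,
    the recurrence rate is the fraction of samples where a[t] equals
    b[t + lag]; positive lags mean series `a` leads series `b`."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    profile = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag >= 0:
            matches = a[: n - lag] == b[lag:]
        else:
            matches = a[-lag:] == b[: n + lag]
        profile[i] = matches.mean()  # recurrence rate at this lag
    return lags, profile

# Hypothetical usage: gaze coded as painting IDs (0-5) at 10 Hz.
rng = np.random.default_rng(0)
speaker = rng.integers(0, 6, size=600)   # 60 s of speaker gaze
listener = np.roll(speaker, 8)           # listener trails by ~0.8 s
lags, rr = cross_recurrence_profile(speaker, listener, max_lag=20)
print("peak lag (samples):", lags[np.argmax(rr)])  # -> 8
```

Under this sign convention, a recurrence peak at a positive lag means the first series (speaker) leads the second (listener), which is the leader-follower pattern the abstract reports for speaker and listener gaze.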


Source journal: Cognitive Science (PSYCHOLOGY, EXPERIMENTAL)
CiteScore: 4.10
Self-citation rate: 8.00%
Annual publications: 139
Journal scope: Cognitive Science publishes articles in all areas of cognitive science, covering such topics as knowledge representation, inference, memory processes, learning, problem solving, planning, perception, natural language understanding, connectionism, brain theory, motor control, intentional systems, and other areas of interdisciplinary concern. Highest priority is given to research reports that are specifically written for a multidisciplinary audience. The audience is primarily researchers in cognitive science and its associated fields, including anthropologists, education researchers, psychologists, philosophers, linguists, computer scientists, neuroscientists, and roboticists.