Lucas Haraped, D. Jacob Gerlofs, Olive Chung-Hui Huang, Cam Hickling, Walter F. Bischof, Pierre Sachse, Alan Kingstone
Cognitive Science, 49(10). Published 2025-10-02. DOI: 10.1111/cogs.70123. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.70123
Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference
During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (n = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.
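The analysis method named in the abstract, cross-recurrence quantification analysis (CRQA), can be illustrated with a minimal sketch. This is not the authors' pipeline; the gaze streams, time-bin size, and target labels below are hypothetical. For two categorical sequences of fixated targets, a cross-recurrence is a time-pair where both people attend to the same target, and the diagonal recurrence profile shows at which lag coupling peaks (a peak at a positive lag means the first stream leads the second):

```python
# Illustrative CRQA sketch (hypothetical data, not the study's pipeline):
# categorical cross-recurrence between two gaze streams, each a sequence
# of fixated targets per time bin.

def cross_recurrence(a, b):
    """Binary cross-recurrence matrix: R[i][j] = 1 when a[i] == b[j]."""
    return [[1 if ai == bj else 0 for bj in b] for ai in a]

def diagonal_profile(a, b, max_lag):
    """Recurrence rate at each lag; a peak at positive lag => `a` leads `b`."""
    n = len(a)
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(i, i + lag) for i in range(n) if 0 <= i + lag < n]
        hits = sum(1 for i, j in pairs if a[i] == b[j])
        profile[lag] = hits / len(pairs)
    return profile

# Hypothetical gaze streams over 8 time bins
# (targets: paintings P1-P3, partner's face F).
speaker  = ["P1", "P1", "P2", "P2", "P3", "P3", "F", "F"]
listener = ["F",  "P1", "P1", "P2", "P2", "P3", "P3", "F"]

profile = diagonal_profile(speaker, listener, max_lag=2)
best_lag = max(profile, key=profile.get)
print(best_lag)  # -> 1: listener matches the speaker one bin later, i.e., speaker gaze leads
```

In this toy example the profile peaks at lag +1 with a recurrence rate of 1.0, the pattern corresponding to the abstract's "speaker gaze leading listener gaze"; the real analysis operates on much longer fixation sequences and compares observed coupling against chance baselines.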
Journal description:
Cognitive Science publishes articles in all areas of cognitive science, covering such topics as knowledge representation, inference, memory processes, learning, problem solving, planning, perception, natural language understanding, connectionism, brain theory, motor control, intentional systems, and other areas of interdisciplinary concern. Highest priority is given to research reports that are specifically written for a multidisciplinary audience. The audience is primarily researchers in cognitive science and its associated fields, including anthropologists, education researchers, psychologists, philosophers, linguists, computer scientists, neuroscientists, and roboticists.