Alexia Galati, Rick Dale, Camila Alviar, Moreno I. Coco
{"title":"任务目标制约了人际协调过程中眼球运动和言语的一致性","authors":"Alexia Galati , Rick Dale , Camila Alviar , Moreno I. Coco","doi":"10.1016/j.jml.2025.104691","DOIUrl":null,"url":null,"abstract":"<div><div>Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; <span><span>Pickering & Garrod, 2004</span></span>), support this view by building on tasks that require monitoring a partner’s perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners’ eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.</div></div>","PeriodicalId":16493,"journal":{"name":"Journal of memory and language","volume":"146 ","pages":"Article 104691"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Task goals constrain the alignment in eye-movements and speech during interpersonal coordination\",\"authors\":\"Alexia Galati , Rick Dale , Camila Alviar , Moreno I. Coco\",\"doi\":\"10.1016/j.jml.2025.104691\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; <span><span>Pickering & Garrod, 2004</span></span>), support this view by building on tasks that require monitoring a partner’s perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. 
We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners’ eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.</div></div>\",\"PeriodicalId\":16493,\"journal\":{\"name\":\"Journal of memory and language\",\"volume\":\"146 \",\"pages\":\"Article 104691\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of memory and language\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0749596X25000841\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of memory and language","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0749596X25000841","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LINGUISTICS","Score":null,"Total":0}
Task goals constrain the alignment in eye-movements and speech during interpersonal coordination
Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner’s perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a “divide and conquer” strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners’ eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than in visual search across a range of CRQA metrics. Gaze alignment also varied over the course of a trial and was related differently to accuracy across tasks: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence, than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than the IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.
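To make the abstract's core measures concrete, the sketch below shows, in a minimal and purely illustrative way, how a categorical cross-recurrence matrix and two standard CRQA measures (recurrence rate and percent determinism) can be computed for two coded behavioral streams. The function names, toy fixation data, and parameter choices are hypothetical and are not the authors' analysis pipeline; dyadic CRQA of this kind is typically run with dedicated software such as the crqa package in R.

# Minimal sketch of categorical cross-recurrence quantification (CRQA).
# Toy data and function names are hypothetical; this does not reproduce
# the study's actual analysis pipeline.

import numpy as np

def cross_recurrence_matrix(seq_a, seq_b):
    """Binary matrix: R[i, j] = 1 when partner A's state at time i
    matches partner B's state at time j."""
    a = np.asarray(seq_a)
    b = np.asarray(seq_b)
    return (a[:, None] == b[None, :]).astype(int)

def recurrence_rate(R):
    """Proportion of recurrent points: overall similarity of the two series."""
    return R.mean()

def percent_determinism(R, min_length=2):
    """Share of recurrent points lying on diagonal lines of at least
    `min_length`, a rough index of dynamic coupling between the series."""
    n_rows, n_cols = R.shape
    recurrent_total = R.sum()
    if recurrent_total == 0:
        return 0.0
    on_lines = 0
    # Walk every diagonal and count runs of consecutive recurrent points.
    for offset in range(-(n_rows - 1), n_cols):
        diag = np.diagonal(R, offset=offset)
        run = 0
        for point in diag:
            if point:
                run += 1
            else:
                if run >= min_length:
                    on_lines += run
                run = 0
        if run >= min_length:
            on_lines += run
    return on_lines / recurrent_total

if __name__ == "__main__":
    # Toy coded fixation streams for two partners (map regions A-D),
    # sampled at a common rate; values are invented for illustration.
    partner_a = list("AABBCCDDAABB")
    partner_b = list("ABBBCCDDAABB")
    R = cross_recurrence_matrix(partner_a, partner_b)
    print("Recurrence rate:", round(recurrence_rate(R), 3))
    print("Percent determinism:", round(percent_determinism(R), 3))

In this framing, higher recurrence rate would correspond to greater overall gaze (or lexical) similarity between partners, while diagonal-line measures index the kind of dynamic coupling the abstract describes.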
Journal overview:
Articles in the Journal of Memory and Language contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes. Special emphasis is given to research articles that provide new theoretical insights based on a carefully laid empirical foundation. The journal generally favors articles that provide multiple experiments. In addition, significant theoretical papers without new experimental findings may be published.
The Journal of Memory and Language is a valuable tool for cognitive scientists, including psychologists, linguists, and others interested in memory and learning, language, reading, and speech.
Research Areas include:
• Topics that illuminate aspects of memory or language processing
• Linguistics
• Neuropsychology