Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions.

IF 1.7 | CAS Tier 4, Psychology | JCR Q3, PSYCHOLOGY
Roy S Hessels, Peitong Li, Sofia Balali, Martin K Teunisse, Ronald Poppe, Diederick C Niehorster, Marcus Nyström, Jeroen S Benjamins, Atsushi Senju, Albert A Salah, Ignace T C Hooge
{"title":"双人互动中的注视-动作耦合、注视-姿态耦合以及注视的外源性吸引。","authors":"Roy S Hessels, Peitong Li, Sofia Balali, Martin K Teunisse, Ronald Poppe, Diederick C Niehorster, Marcus Nyström, Jeroen S Benjamins, Atsushi Senju, Albert A Salah, Ignace T C Hooge","doi":"10.3758/s13414-024-02978-4","DOIUrl":null,"url":null,"abstract":"<p><p>In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions.\",\"authors\":\"Roy S Hessels, Peitong Li, Sofia Balali, Martin K Teunisse, Ronald Poppe, Diederick C Niehorster, Marcus Nyström, Jeroen S Benjamins, Atsushi Senju, Albert A Salah, Ignace T C Hooge\",\"doi\":\"10.3758/s13414-024-02978-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. 
We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.</p>\",\"PeriodicalId\":55433,\"journal\":{\"name\":\"Attention Perception & Psychophysics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Attention Perception & Psychophysics\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.3758/s13414-024-02978-4\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Attention Perception & Psychophysics","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13414-024-02978-4","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.
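The abstract mentions quantifying coupling patterns and comparing gaze-location inference against a baseline model, but gives no computational detail. The following is a minimal, hypothetical sketch (not the authors' implementation) of how such quantities could be computed from frame-aligned categorical annotations; the function names, the lag convention, and the synthetic data are illustrative assumptions only.

```python
# Illustrative sketch, assuming frame-aligned labels of the fixated object and
# the manipulated object (e.g., Duplo brick IDs). Not the authors' method.
import random
from collections import Counter


def coupling_at_lag(gaze, action, lag):
    """Fraction of frames where gaze[t] matches action[t - lag].

    lag > 0: gaze lags the manual action; lag < 0: gaze leads it.
    """
    pairs = list(zip(gaze[lag:], action)) if lag >= 0 else list(zip(gaze, action[-lag:]))
    return sum(g == a for g, a in pairs) / len(pairs) if pairs else 0.0


def accuracy(true_gaze, predicted):
    """Proportion of frames where the predicted gaze target is correct."""
    return sum(g == p for g, p in zip(true_gaze, predicted)) / len(true_gaze)


if __name__ == "__main__":
    random.seed(0)
    bricks = ["red", "blue", "green", "yellow"]

    # Synthetic data: manual actions persist for 15-30 frames, and gaze tends
    # to arrive on the acted-on brick ~3 frames ahead of the action.
    action = []
    while len(action) < 1000:
        action.extend([random.choice(bricks)] * random.randint(15, 30))
    action = action[:1000]
    gaze = [action[min(t + 3, len(action) - 1)] if random.random() < 0.7
            else random.choice(bricks)
            for t in range(len(action))]

    # (1) Coupling profile over temporal lags: the peak indicates at which
    # delay gaze and one's own manual action are most often on the same object.
    for lag in range(-6, 7, 3):
        print(f"lag {lag:+d} frames: coupling = {coupling_at_lag(gaze, action, lag):.2f}")

    # (2) Inferring gaze location from one's own manual action, compared with
    # a baseline that always predicts the most frequently fixated object.
    baseline = [Counter(gaze).most_common(1)[0][0]] * len(gaze)
    print("action-based inference accuracy:", round(accuracy(gaze, action), 2))
    print("majority-baseline accuracy:     ", round(accuracy(gaze, baseline), 2))
```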

Source journal: Attention, Perception, & Psychophysics
CiteScore: 3.60
Self-citation rate: 17.60%
Articles published: 197
Review time: 4-8 weeks
Journal description: The journal Attention, Perception, & Psychophysics is an official journal of the Psychonomic Society. It spans all areas of research in sensory processes, perception, attention, and psychophysics. Most articles published are reports of experimental work; the journal also presents theoretical, integrative, and evaluative reviews. Commentary on issues of importance to researchers appears in a special section of the journal. Founded in 1966 as Perception & Psychophysics, the journal assumed its present name in 2009.