Does seeing gesture lighten or increase the load?
Autumn B. Hostetter, Stuart H. Murch, Lyla Rothschild, Cierra S. Gillard
Gesture, published 2018-12-31. DOI: 10.1075/GEST.17017.HOS
Abstract: We examined the cognitive resources involved in processing speech with gesture, compared to the same speech without gesture, across four studies using a dual-task paradigm. Participants viewed videos of a woman describing spatial arrays either with or without gesture and then attempted to choose the target array from among four choices. Participants’ cognitive load during this comprehension task was measured by how well they could remember the location and identity of digits in a secondary task. We found that addressees experience additional visuospatial load when processing gestures compared to speech alone, and that this load arises primarily when addressees attempt to use their memory of the gestured descriptions to choose the target array. However, this cost occurs only when gestures about horizontal spatial relations (i.e., left and right) are produced from the speaker’s egocentric perspective.
About the journal:
Gesture publishes articles reporting original research, as well as survey and review articles, on all aspects of gesture. The journal aims to stimulate and facilitate scholarly communication between the different disciplines within which work on gesture is conducted. For this reason, papers written in the spirit of cooperation between disciplines are especially encouraged. Topics may include, but are by no means limited to: the relationship between gesture and speech; the role gesture may play in communication in all circumstances of social interaction, including conversations, the workplace, and instructional settings; gesture and cognition; and the development of gesture in children.