Gesture | Pub Date: 2023-11-27 | DOI: 10.1075/gest.22001.wil
Beatrijs Wille, Hilde Nyffels, O. Capirci
The road to language through gesture
This study explores the role of gestures in Flemish Sign Language (VGT) development through a longitudinal observation of three deaf children's early interactions. The children were followed over a period of one and a half years, at the ages of 6, 9, 12, 18, and 24 months. The research compares the communicative development of a deaf child growing up in a deaf family with that of two deaf children growing up in hearing families; the latter two received early cochlear implants at 10 and 7 months of age, respectively. This is the first study to describe the types and tokens of gestures children use in early dyadic interactions in Flanders (Belgium). Our observations reveal three distinct developmental patterns in the use of gestures and the production of gesture combinations. The study supports the finding that children's gestural output reflects their parents' language, and it further indicates an effect of age at cochlear implantation.
Gesture | Pub Date: 2023-11-14 | DOI: 10.1075/gest.21021.don
Andrea Marquardt Donovan, Sarah A. Brown, Martha W. Alibali
Weakest link or strongest link?
Teachers often use gestures to connect representations of mathematical ideas. This research examined (1) whether such linking gestures help students understand connections among representations and (2) whether sets of gestures that include repeated handshapes and motions – termed gestural catchments – are particularly beneficial. Undergraduates viewed one of four video lessons connecting two representations of multiplication. In the control lesson, the instructor produced beat gestures that did not link the representations. In the link-only lesson, the instructor used gestures to link representations, but the gestures did not form a catchment. In the consistent-catchment lesson, the instructor highlighted corresponding elements of the two representations using identical gestures. In the inconsistent-catchment lesson, the instructor highlighted non-corresponding elements of the two representations using identical gestures. Participants who saw the lesson with the consistent catchment – which highlighted similarities between representations – were most likely to understand the novel representation and to report learning from the lesson.
Gesture | Pub Date: 2023-09-01 | DOI: 10.1075/gest.22012.rau
S. Rauzy, Mary Amoyal
Automatic tool to annotate smile intensities in conversational face-to-face interactions
This study presents an automatic tool that traces smile intensities along a video recording of conversational face-to-face interaction. The processed output is a sequence of adjusted time intervals labeled according to the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering, 2016), a five-level scale ranging from neutral facial expression to laughing smile. The tool's underlying statistical model, detailed in this study, is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions. The tool can be used to advantage for annotating smiles in interaction, and the results are twofold. First, the evaluation shows an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the labels and interval boundaries of the automatic output reduces annotation time by a factor of 10 compared with manually annotating smile intensities from scratch. The annotation engine uses the state-of-the-art toolbox OpenFace to track the face and to measure the intensities of the facial Action Units of interest throughout the video. The documentation and scripts of our tool, the SMAD software, are available for download from the HMAD open-source project page at https://github.com/srauzy/HMAD (last accessed 31 July 2023).
Gesture | Pub Date: 2023-08-24 | DOI: 10.1075/gest.20019.san
Iván Sánchez-Borges, C. J. Álvarez
Iconic gestures serve as primes for both auditory and visual word forms
Previous studies using cross-modal semantic priming have found that iconic gestures prime target words related to those gestures. In the present study, two analogous experiments examined this priming effect with primes and targets presented in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task in which target words (e.g., "push") and pseudowords had to be discriminated, primed by overlapping iconic gestures that were either semantically related to the words (e.g., moving both hands forward) or unrelated. Experiment 2 was similar, but both gestures and words were presented visually. The grammatical category of the words was also manipulated: they were nouns and verbs. In both experiments, words related to gestures were recognized faster and with fewer errors than unrelated words, and similarly for both types of words.
Gesture | Pub Date: 2023-08-24 | DOI: 10.1075/gest.21001.inb
Anna Inbar
The Raised Index Finger gesture in Hebrew multimodal interaction
The present study examines the roles that the gesture of the Raised Index Finger (RIF) plays in Hebrew multimodal interaction. The study reveals that the RIF is associated with diverse linguistic phenomena and tends to appear in contexts in which the speaker presents a message or speech act that violates the hearer's expectations (based on either general knowledge or prior discourse). The study suggests that the RIF serves the function of discourse deixis: speakers point to their message, creating a referent in the extralinguistic context that they treat as an object of their stance, evaluating the content of the utterance or speech act as unexpected by the hearer and displaying epistemic authority. Setting up such a frame by which the information is to be interpreted provides the basis for a swifter update of the common ground in situations of (assumed) differences between the assumptions of the speaker and the hearer.
Gesture | Pub Date: 2023-08-21 | DOI: 10.1075/gest.18020.nic
E. Nicoladis, Paula Marentette, Candace Lam
Co-speech gestures can interfere with learning foreign language words
Co-speech gestures can help the learning, processing, and memory of words and concepts, particularly motoric and spatial concepts such as verbs. The purpose of the present studies was to test whether co-speech gestures support the learning of words through gist traces of movement. We asked English monolinguals to learn 40 Cantonese words (20 verbs and 20 nouns). In two studies, we found support for the gist traces of congruent gestures being movement: participants who saw congruent gestures while hearing Cantonese words thought they had seen more verbs than participants in any other condition. However, gist traces were unrelated to the accurate recall of either nouns or verbs. In both studies, learning Cantonese words accompanied by congruent gestures tended to interfere with the learning of nouns (but not verbs). In Study 2, we ruled out the possibility that this interference was due either to gestures conveying representational information in another medium or to distraction from moving hands. We argue that gestures can interfere with learning foreign language words when they represent the referents (e.g., show shape or size) because learners must interpret the hands as something other than hands.
Gesture | Pub Date: 2023-07-25 | DOI: 10.1075/gest.21008.ric
Alexander Rice
A recurring absence gesture in Northern Pastaza Kichwa
In this paper I posit the use of a spread-fingered hand torque gesture among speakers of Northern Pastaza Kichwa (Quechuan, Ecuador) as a recurrent gesture conveying the semantic theme of absence. The data come from a documentary video corpus collected by multiple researchers. The gesture prototypically takes the form of at least one pair of rapid rotations of the palm (the torque). Fingers can be spread or slightly flexed towards the palm to varying degrees. This gesture is performed in a consistent manner across speakers (and expressions) and co-occurs with a set of speech strings with related semantic meanings. Taking a cognitive linguistic approach, I analyse the form, function, and contexts of this gesture and argue that, taken together, these features show it should be considered a recurrent gesture that indicates absence.
Gesture | Pub Date: 2023-07-04 | DOI: 10.1075/gest.21005.wil
Robert F. Williams
Coordinating and sharing gesture spaces in collaborative reasoning
In collaborative reasoning about what causes the seasons, phases of the moon, and tides, participants (three to four per group) introduce ideas by gesturing depictively in personal space. Other group members copy and vary these gestures, imbuing their gesture spaces with similar conceptual properties. This leads at times to gestures being produced in shared space as members elaborate and contest a developing group model. Gestures in the shared space mostly coincide with conversational turns; more rarely, participants gesture collaboratively as they enact a joint conception. An emergent shared space is sustained by the joint focus and actions of participants and may be repositioned, reoriented, or reshaped to meet changing representational demands as the discourse develops. Shared space is used alongside personal spaces, and further research could shed light on how gesture placement and other markers (such as eye gaze) contribute to the meaning or function of gestures in group activity.