{"title":"从动作捕捉数据解码韵律信息:共同语音手势的重力。","authors":"Jacob P Momsen, Seana Coulson","doi":"10.1162/opmi_a_00196","DOIUrl":null,"url":null,"abstract":"<p><p>In part due to correspondence in time, seeing how a speaking body moves can impact how speech is apprehended. Despite this, little is known about whether and which specific kinematic features of co-speech movements are relevant for their integration with speech. The current study uses machine learning techniques to investigate how co-speech gestures can be quantified to model vocal acoustics within an individual speaker. Specifically, we address whether kinetic descriptions of human movement are relevant for modeling their relationship with speech in time. To test this, we apply experimental manipulations that either highlight or obscure the relationship between co-speech movement kinematics and downward gravitational acceleration. Across two experiments, we provide evidence that quantifying co-speech movement as a function of its anisotropic relation to downward gravitational forces improves how well those co-speech movements can be used to predict prosodic dimensions of speech, as represented by the low-pass envelope. This study supports theoretical perspectives that invoke biomechanics to help explain speech-gesture synchrony and offers motivation for further behavioral or neuroimaging work investigating audiovisual integration and/or biological motion perception in the context of multimodal discourse.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"652-664"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058326/pdf/","citationCount":"0","resultStr":"{\"title\":\"Decoding Prosodic Information from Motion Capture Data: The Gravity of Co-Speech Gestures.\",\"authors\":\"Jacob P Momsen, Seana Coulson\",\"doi\":\"10.1162/opmi_a_00196\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In part due to correspondence in time, seeing how a speaking body moves can impact how speech is apprehended. Despite this, little is known about whether and which specific kinematic features of co-speech movements are relevant for their integration with speech. The current study uses machine learning techniques to investigate how co-speech gestures can be quantified to model vocal acoustics within an individual speaker. Specifically, we address whether kinetic descriptions of human movement are relevant for modeling their relationship with speech in time. To test this, we apply experimental manipulations that either highlight or obscure the relationship between co-speech movement kinematics and downward gravitational acceleration. Across two experiments, we provide evidence that quantifying co-speech movement as a function of its anisotropic relation to downward gravitational forces improves how well those co-speech movements can be used to predict prosodic dimensions of speech, as represented by the low-pass envelope. 
This study supports theoretical perspectives that invoke biomechanics to help explain speech-gesture synchrony and offers motivation for further behavioral or neuroimaging work investigating audiovisual integration and/or biological motion perception in the context of multimodal discourse.</p>\",\"PeriodicalId\":32558,\"journal\":{\"name\":\"Open Mind\",\"volume\":\"9 \",\"pages\":\"652-664\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058326/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Open Mind\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1162/opmi_a_00196\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Open Mind","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/opmi_a_00196","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Decoding Prosodic Information from Motion Capture Data: The Gravity of Co-Speech Gestures.
In part due to correspondence in time, seeing how a speaking body moves can impact how speech is apprehended. Despite this, little is known about whether and which specific kinematic features of co-speech movements are relevant for their integration with speech. The current study uses machine learning techniques to investigate how co-speech gestures can be quantified to model vocal acoustics within an individual speaker. Specifically, we address whether kinetic descriptions of human movement are relevant for modeling their relationship with speech in time. To test this, we apply experimental manipulations that either highlight or obscure the relationship between co-speech movement kinematics and downward gravitational acceleration. Across two experiments, we provide evidence that quantifying co-speech movement as a function of its anisotropic relation to downward gravitational forces improves how well those co-speech movements can be used to predict prosodic dimensions of speech, as represented by the low-pass envelope. This study supports theoretical perspectives that invoke biomechanics to help explain speech-gesture synchrony and offers motivation for further behavioral or neuroimaging work investigating audiovisual integration and/or biological motion perception in the context of multimodal discourse.
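The abstract refers to two quantities: the low-pass amplitude envelope of speech and co-speech movement quantified by its anisotropic relation to downward gravity. The sketch below shows one minimal way such quantities could be computed and related, assuming a Hilbert-transform envelope, a z-up motion-capture coordinate frame, and a ridge regression mapping; all function names, cutoff frequencies, sample rates, and the regression choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: relating gravity-aligned gesture kinetics to the
# low-pass speech envelope. Parameters and features are illustrative,
# not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge

def lowpass_envelope(audio, sr, cutoff_hz=10.0, order=4):
    """Amplitude envelope of speech, low-pass filtered below cutoff_hz."""
    envelope = np.abs(hilbert(audio))               # instantaneous amplitude
    b, a = butter(order, cutoff_hz / (sr / 2), btype="low")
    return filtfilt(b, a, envelope)                 # zero-phase smoothing

def gravity_aligned_features(positions, fps):
    """Split acceleration into vertical (gravity-aligned) and horizontal
    components from motion-capture positions of shape (n_frames, 3)."""
    velocity = np.gradient(positions, 1.0 / fps, axis=0)
    accel = np.gradient(velocity, 1.0 / fps, axis=0)
    vertical = accel[:, 2]                          # assumes z is the vertical axis
    horizontal = np.linalg.norm(accel[:, :2], axis=1)
    return np.column_stack([vertical, horizontal])

# Toy example with synthetic placeholder data, just to show the alignment step.
sr, fps, seconds = 16_000, 120, 10
audio = np.random.randn(sr * seconds)                          # stand-in for a speech recording
mocap = np.cumsum(np.random.randn(fps * seconds, 3), axis=0)   # stand-in wrist trajectory

env = lowpass_envelope(audio, sr)
# Resample the envelope to the motion-capture frame rate so both series align in time.
env_at_fps = np.interp(np.arange(fps * seconds) / fps,
                       np.arange(len(env)) / sr, env)

X = gravity_aligned_features(mocap, fps)
model = Ridge(alpha=1.0).fit(X, env_at_fps)         # predict envelope from kinetic features
print("R^2 on training data:", model.score(X, env_at_fps))
```

Separating the vertical from the horizontal acceleration component is one simple way to operationalize an "anisotropic relation to downward gravitational forces"; an analysis that instead ignored gravity would collapse both into a single movement magnitude.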