Extracting vocal characteristics and calculating vocal synchrony using Praat and R: A tutorial

Désirée Schoenherr, Alisa Shugaley, Franziska Roller, Lukas A. Knitter, Bernhard Strauss, Uwe Altmann

Methodology: European Journal of Research Methods for The Behavioral and Social Sciences, published 2023-09-29. DOI: https://doi.org/10.5964/meth.9375
Abstract
In clinical research, the dependence of results on the methods used is frequently discussed. In research on nonverbal synchrony, human ratings and automated methods do not lead to congruent results. Even when automated methods are used, the choice of method and parameter settings is important for obtaining congruent results. However, these choices are often insufficiently reported and do not meet standards of transparency and reproducibility. This tutorial is aimed at researchers who are not familiar with the software Praat and R, and it shows in detail how to extract acoustic features such as fundamental frequency or speech rate from video or audio recordings of conversations. It further presents how vocal synchrony indices can be calculated from these features to represent how well two interaction partners vocally adapt to each other. All scripts used, as well as a minimal example, can be found on the Open Science Framework and GitHub.
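To give a concrete sense of the second step described above, the following is a minimal base-R sketch of one common way to quantify vocal synchrony: windowed correlations between two speakers' fundamental-frequency (f0) contours. It is not the authors' script; the function name `window_synchrony`, the window and step sizes, and the simulated f0 contours are illustrative assumptions, and it presumes the f0 series have already been extracted (e.g., with Praat) on a common frame grid, with NA marking unvoiced frames.

```r
# Illustrative sketch (not the tutorial's script): windowed correlation
# between two f0 time series, one per interaction partner.

window_synchrony <- function(f0_a, f0_b, win = 50, step = 25) {
  stopifnot(length(f0_a) == length(f0_b))
  starts <- seq(1, length(f0_a) - win + 1, by = step)
  sapply(starts, function(s) {
    a <- f0_a[s:(s + win - 1)]
    b <- f0_b[s:(s + win - 1)]
    # Pearson correlation within the window; unvoiced (NA) frames are dropped
    if (sum(complete.cases(a, b)) > 2) cor(a, b, use = "complete.obs") else NA
  })
}

# Toy example with simulated f0 contours (Hz), e.g. one value per 10 ms frame
set.seed(1)
t <- seq(0, 6 * pi, length.out = 500)
f0_speaker1 <- 120 + 10 * sin(t) + rnorm(500, sd = 2)
f0_speaker2 <- 190 + 10 * sin(t - 0.5) + rnorm(500, sd = 2)

r_per_window <- window_synchrony(f0_speaker1, f0_speaker2)
# One possible global index: mean absolute within-window correlation
mean(abs(r_per_window), na.rm = TRUE)
```

The tutorial itself details the full pipeline, including the Praat-based feature extraction and the specific synchrony indices; the sketch above only shows the general shape of turning two extracted feature series into a per-window and a global synchrony value.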