Melanie S. Schreiner, Martin Zettersten, Christina Bergmann, Michael C. Frank, Tom Fritzsche, Nayeli Gonzalez-Gomez, Kiley Hamlin, Natalia Kartushina, Danielle J. Kellier, Nivedita Mani, Julien Mayor, Jenny Saffran, Mohinish Shukla, Priya Silverstein, Melanie Soderstrom, Matthias Lippold
{"title":"在一项大型预先登记的婴儿实验中,婴儿引导言语偏好的重复测试可靠性证据有限。","authors":"Melanie S. Schreiner, Martin Zettersten, Christina Bergmann, Michael C. Frank, Tom Fritzsche, Nayeli Gonzalez-Gomez, Kiley Hamlin, Natalia Kartushina, Danielle J. Kellier, Nivedita Mani, Julien Mayor, Jenny Saffran, Mohinish Shukla, Priya Silverstein, Melanie Soderstrom, Matthias Lippold","doi":"10.1111/desc.13551","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <p>Test-retest reliability—establishing that measurements remain consistent across multiple testing sessions—is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants’ preference for infant-directed speech over adult-directed speech in a large sample (<i>N </i>= 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants’ speech preference (overall <i>r</i> = 0.09, 95% CI [−0.06,0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants.</p>\n </section>\n \n <section>\n \n <h3> Research Highlights</h3>\n \n <div>\n <ul>\n \n <li>We assessed test-retest reliability of infants’ preference for infant-directed over adult-directed speech in a large pre-registered sample (<i>N </i>= 158).</li>\n \n <li>There was no consistent evidence of test-retest reliability in measures of infants’ speech preference.</li>\n \n <li>Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size.</li>\n \n <li>Developmental research relying on stable individual differences should consider the underlying reliability of its measures.</li>\n </ul>\n </div>\n </section>\n </div>","PeriodicalId":48392,"journal":{"name":"Developmental Science","volume":"27 6","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/desc.13551","citationCount":"0","resultStr":"{\"title\":\"Limited evidence of test-retest reliability in infant-directed speech preference in a large preregistered infant experiment\",\"authors\":\"Melanie S. Schreiner, Martin Zettersten, Christina Bergmann, Michael C. Frank, Tom Fritzsche, Nayeli Gonzalez-Gomez, Kiley Hamlin, Natalia Kartushina, Danielle J. 
Kellier, Nivedita Mani, Julien Mayor, Jenny Saffran, Mohinish Shukla, Priya Silverstein, Melanie Soderstrom, Matthias Lippold\",\"doi\":\"10.1111/desc.13551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <p>Test-retest reliability—establishing that measurements remain consistent across multiple testing sessions—is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants’ preference for infant-directed speech over adult-directed speech in a large sample (<i>N </i>= 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants’ speech preference (overall <i>r</i> = 0.09, 95% CI [−0.06,0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Research Highlights</h3>\\n \\n <div>\\n <ul>\\n \\n <li>We assessed test-retest reliability of infants’ preference for infant-directed over adult-directed speech in a large pre-registered sample (<i>N </i>= 158).</li>\\n \\n <li>There was no consistent evidence of test-retest reliability in measures of infants’ speech preference.</li>\\n \\n <li>Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size.</li>\\n \\n <li>Developmental research relying on stable individual differences should consider the underlying reliability of its measures.</li>\\n </ul>\\n </div>\\n </section>\\n </div>\",\"PeriodicalId\":48392,\"journal\":{\"name\":\"Developmental Science\",\"volume\":\"27 6\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/desc.13551\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Developmental Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/desc.13551\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, DEVELOPMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Developmental 
Science","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/desc.13551","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, DEVELOPMENTAL","Score":null,"Total":0}
Citations: 0
Limited evidence of test-retest reliability in infant-directed speech preference in a large preregistered infant experiment
Test-retest reliability—establishing that measurements remain consistent across multiple testing sessions—is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and the reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment to retest their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = 0.09, 95% CI [−0.06, 0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis produced a numerical increase in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants.
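As an illustration of the analysis logic described above (not the authors' actual analysis code), the following sketch estimates test-retest reliability as the Pearson correlation between infants' preference scores from two sessions, and shows how a stricter minimum-trials inclusion criterion shrinks the usable sample. All variable names, column names, and data are hypothetical.

```python
# Illustrative sketch only: hypothetical per-infant data with a preference
# score and a count of usable trials for each of two sessions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

n = 158  # sample size matching the paper; the values below are simulated
df = pd.DataFrame({
    "pref_session1": rng.normal(0.05, 0.15, n),
    "pref_session2": rng.normal(0.05, 0.15, n),
    "trials_session1": rng.integers(2, 17, n),
    "trials_session2": rng.integers(2, 17, n),
})

def test_retest(data: pd.DataFrame, min_trials: int) -> tuple[float, int]:
    """Pearson r between session scores for infants meeting a trial criterion."""
    kept = data[(data["trials_session1"] >= min_trials) &
                (data["trials_session2"] >= min_trials)]
    r, _ = stats.pearsonr(kept["pref_session1"], kept["pref_session2"])
    return r, len(kept)

# Re-estimate reliability under increasingly strict inclusion criteria.
# In the paper, stricter criteria numerically increased r while
# substantially reducing the effective N; here the data are random noise,
# so the point is only how the computation and the N trade-off work.
for min_trials in (2, 4, 8, 12):
    r, n_kept = test_retest(df, min_trials)
    print(f"min trials = {min_trials:2d}: r = {r:+.2f}, N = {n_kept}")
```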
Research Highlights
We assessed test-retest reliability of infants’ preference for infant-directed over adult-directed speech in a large pre-registered sample (N = 158).
There was no consistent evidence of test-retest reliability in measures of infants’ speech preference.
Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size.
Developmental research relying on stable individual differences should consider the underlying reliability of its measures.
Journal Introduction:
Developmental Science publishes cutting-edge theory and up-to-the-minute research on scientific developmental psychology from leading thinkers in the field. It is currently the only journal that specifically focuses on human developmental cognitive neuroscience. Coverage includes:
- Clinical, computational, and comparative approaches to development
- Key advances in cognitive and social development
- Developmental cognitive neuroscience
- Functional neuroimaging of the developing brain