Lindsay Hippe, Victoria Hennessy, Naja Ferjan Ramirez, T. Christina Zhao
{"title":"比较北美婴儿出生后头两年在家庭环境中的语音和音乐输入。","authors":"Lindsay Hippe, Victoria Hennessy, Naja Ferjan Ramirez, T. Christina Zhao","doi":"10.1111/desc.13528","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <p>Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants’ daily lives. Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants’ home environments, at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4</p>\n </section>\n \n <section>\n \n <h3> Research Highlights</h3>\n \n <div>\n <ul>\n \n <li>This study is the first to compare music input to speech input in infants’ natural home environment across infancy.</li>\n \n <li>We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.</li>\n \n <li>Our main results show that infants overall receive significantly more speech input than music input. This gap widens as the infants get older.</li>\n \n <li>Our results also showed that the music input was largely from electronic devices and not intended for the infants, a pattern opposite to speech input.</li>\n </ul>\n </div>\n </section>\n </div>","PeriodicalId":48392,"journal":{"name":"Developmental Science","volume":"27 5","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparison of speech and music input in North American infants’ home environment over the first 2 years of life\",\"authors\":\"Lindsay Hippe, Victoria Hennessy, Naja Ferjan Ramirez, T. Christina Zhao\",\"doi\":\"10.1111/desc.13528\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <p>Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. 
Across cultures, speech and music are two dominant auditory signals in infants’ daily lives. Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants’ home environments, at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4</p>\\n </section>\\n \\n <section>\\n \\n <h3> Research Highlights</h3>\\n \\n <div>\\n <ul>\\n \\n <li>This study is the first to compare music input to speech input in infants’ natural home environment across infancy.</li>\\n \\n <li>We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.</li>\\n \\n <li>Our main results show that infants overall receive significantly more speech input than music input. This gap widens as the infants get older.</li>\\n \\n <li>Our results also showed that the music input was largely from electronic devices and not intended for the infants, a pattern opposite to speech input.</li>\\n </ul>\\n </div>\\n </section>\\n </div>\",\"PeriodicalId\":48392,\"journal\":{\"name\":\"Developmental Science\",\"volume\":\"27 5\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-05-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Developmental Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/desc.13528\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, DEVELOPMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Developmental Science","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/desc.13528","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, DEVELOPMENTAL","Score":null,"Total":0}
Comparison of speech and music input in North American infants’ home environment over the first 2 years of life
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants’ daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants’ home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s each), randomly sampled from the recordings, via Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input, and that this gap widens as infants get older. At every age point, infants were exposed to more music from electronic devices than from in-person sources; this pattern was reversed for speech. The percentage of input intended for infants remained constant over time for music, while it significantly increased for speech. We propose possible explanations for the limited music input, relative to speech input, observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats of using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4
Research Highlights
This study is the first to compare music input to speech input in infants’ natural home environment across infancy.
We utilized a crowdsourcing approach to annotate a longitudinal dataset of daylong audio recordings collected in North American home environments.
Our main results show that infants overall receive significantly more speech input than music input. This gap widens as the infants get older.
Our results also show that music input came largely from electronic devices and was mostly not intended for the infants, the opposite of the pattern for speech input.
Journal Introduction:
Developmental Science publishes cutting-edge theory and up-to-the-minute research on scientific developmental psychology from leading thinkers in the field. It is currently the only journal that specifically focuses on human developmental cognitive neuroscience. Coverage includes:
- Clinical, computational and comparative approaches to development
- Key advances in cognitive and social development
- Developmental cognitive neuroscience
- Functional neuroimaging of the developing brain