Analysis of Self-Supervised Speech Models on Children's Speech and Infant Vocalizations

Jialu Li, Mark Hasegawa-Johnson, Nancy L McElwain
2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 550-554
DOI: 10.1109/ICASSPW62465.2024.10626416
Published: 2024-04-01 (Epub 2024-08-15)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12126097/pdf/
Citations: 0

Abstract


To understand why self-supervised learning (SSL) models have empirically achieved strong performances on several speech-processing downstream tasks, numerous studies have focused on analyzing the encoded information of the SSL layer representations in adult speech. Limited work has investigated how pre-training and fine-tuning affect SSL models encoding children's speech and vocalizations. In this study, we aim to bridge this gap by probing SSL models on two relevant downstream tasks: (1) phoneme recognition (PR) on the speech of adults, older children (8-10 years old), and younger children (1-4 years old), and (2) vocalization classification (VC) distinguishing cry, fuss, and babble for infants under 14 months old. For younger children's PR, the superiority of fine-tuned SSL models is largely due to their ability to learn features that represent older children's speech and then adapt those features to the speech of younger children. For infant VC, SSL models pre-trained on large-scale home recordings learn to leverage phonetic representations at middle layers, and thereby enhance the performance of this task.
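The probing methodology described above can be illustrated with a minimal sketch: extract per-layer representations from an SSL model, fit a simple linear classifier on each layer, and compare accuracies to see where task-relevant information (e.g., cry/fuss/babble distinctions) is encoded. The setup below is an assumed, self-contained illustration using synthetic features in place of real SSL hidden states; the paper's actual probing configuration may differ.

```python
# Illustrative layer-wise linear probing sketch (assumed setup, not the paper's exact code).
# In practice, hidden_states would hold mean-pooled frame representations of each
# utterance at every layer of a pre-trained SSL model; here they are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_layers, n_utts, dim = 12, 200, 64          # e.g., 12 transformer layers, 64-d features
labels = rng.integers(0, 3, size=n_utts)     # 3 classes standing in for cry / fuss / babble

# Synthetic stand-in for SSL layer outputs, with a weak class signal injected
# so the probe has something to find in this demo.
hidden_states = rng.normal(size=(n_layers, n_utts, dim))
hidden_states += labels[None, :, None] * 0.5

layer_accuracy = []
for layer in range(n_layers):
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states[layer], labels, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    layer_accuracy.append(probe.score(X_te, y_te))

best = int(np.argmax(layer_accuracy))
print(f"best probing layer: {best}, accuracy: {layer_accuracy[best]:.2f}")
```

Plotting `layer_accuracy` against layer index is the standard way to visualize such a probe; a peak at the middle layers would mirror the paper's finding that pre-trained models encode phonetic information there.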
