M-BEST-RQ: A Multi-Channel Speech Foundation Model for Smart Glasses
Yufeng Yang, Desh Raj, Ju Lin, Niko Moritz, Junteng Jia, Gil Keren, Egor Lakomkin, Yiteng Huang, Jacob Donley, Jay Mahadeokar, Ozlem Kalinli
arXiv:2409.11494 · arXiv - EE - Audio and Speech Processing · 2024-09-17
Abstract
The growing popularity of multi-channel wearable devices, such as smart glasses, has led to a surge of applications such as targeted speech recognition and enhanced hearing. However, current approaches to these tasks use independently trained models, which may not benefit from large amounts of unlabeled data. In this paper, we propose M-BEST-RQ, the first multi-channel speech foundation model for smart glasses, designed to leverage large-scale self-supervised learning (SSL) in an array-geometry agnostic manner. While prior work on multi-channel speech SSL has only been evaluated in simulated settings, we curate a suite of real downstream tasks to evaluate our model, namely (i) conversational automatic speech recognition (ASR), (ii) spherical active source localization, and (iii) glasses wearer voice activity detection, which are sourced from the MMCSG and EasyCom datasets. We show that a general-purpose M-BEST-RQ encoder is able to match or surpass supervised models across all tasks. For the conversational ASR task in particular, using only 8 hours of labeled speech, our model outperforms a supervised ASR baseline trained on 2000 hours of labeled data, demonstrating the effectiveness of our approach.
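
The abstract does not describe the pretraining objective in detail, but the model's name suggests it builds on BEST-RQ-style SSL, in which an encoder is trained to predict discrete targets produced by a frozen random-projection quantizer. The sketch below is a minimal, hypothetical illustration of such a quantizer, together with one possible array-geometry agnostic reduction (averaging per-channel features before quantization so the target does not depend on microphone count or layout). The class names, the channel-averaging strategy, and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RandomProjectionQuantizer(nn.Module):
    """BEST-RQ-style quantizer (sketch): a frozen random projection maps speech
    features onto a random codebook; the nearest code index is the SSL target."""

    def __init__(self, feat_dim: int, code_dim: int = 16, num_codes: int = 8192, seed: int = 0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Both the projection and the codebook are random and never trained.
        self.register_buffer("proj", torch.randn(feat_dim, code_dim, generator=g))
        self.register_buffer(
            "codebook", F.normalize(torch.randn(num_codes, code_dim, generator=g), dim=-1)
        )

    @torch.no_grad()
    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim)
        z = F.normalize(feats @ self.proj, dim=-1)      # (B, T, code_dim), unit norm
        sims = z @ self.codebook.t()                    # cosine similarity to each code
        return sims.argmax(dim=-1)                      # discrete targets, shape (B, T)


def channel_agnostic_targets(multichannel_feats: torch.Tensor,
                             quantizer: RandomProjectionQuantizer) -> torch.Tensor:
    """Hypothetical geometry-agnostic reduction: average features over channels
    before quantization, so the target is independent of the array layout."""
    # multichannel_feats: (batch, channels, time, feat_dim)
    mono = multichannel_feats.mean(dim=1)               # (B, T, feat_dim)
    return quantizer(mono)


if __name__ == "__main__":
    B, C, T, D = 2, 5, 100, 80      # e.g. a 5-mic glasses array, 80-dim log-mel frames
    feats = torch.randn(B, C, T, D)
    q = RandomProjectionQuantizer(feat_dim=D)
    targets = channel_agnostic_targets(feats, q)
    print(targets.shape)            # torch.Size([2, 100])
```

In a masked-prediction setup, these frozen targets would serve as labels for masked frames while the multi-channel encoder itself is trained; the targets require no transcription, which is what allows pretraining on large amounts of unlabeled device recordings.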