Single-stage TTS with Masked Audio Token Modeling and Semantic Knowledge Distillation
Gerard I. Gállego, Roy Fejgin, Chunghsin Yeh, Xiaoyu Liu, Gautam Bhattacharya
arXiv:2409.11003 (arXiv - EE - Signal Processing), published 2024-09-17
Abstract
Audio token modeling has become a powerful framework for speech synthesis,
with two-stage approaches employing semantic tokens remaining prevalent. In
this paper, we aim to simplify this process by introducing a semantic knowledge
distillation method that enables high-quality speech generation in a single
stage. Our proposed model improves speech quality, intelligibility, and speaker
similarity compared to a single-stage baseline. Although two-stage systems
still lead in intelligibility, our model significantly narrows the gap while
delivering comparable speech quality. These findings showcase the potential of
single-stage models to achieve efficient, high-quality TTS with a more compact
and streamlined architecture.
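The abstract describes combining masked audio token modeling with a semantic knowledge distillation objective. As a rough illustration only (the paper's actual losses, feature choices, and weighting are not given here), a training objective of this kind is often a masked-token cross-entropy plus a term pulling student hidden states toward a teacher's semantic features; the cosine-distance distillation term, the `lam` weight, and all names below are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_token_ce(logits, targets, mask):
    """Cross-entropy over masked positions only (masked token modeling).

    logits:  (T, V) predicted distributions over the audio-token vocabulary
    targets: (T,)   ground-truth token ids
    mask:    (T,)   1.0 where the token was masked and must be predicted
    """
    probs = softmax(logits)
    nll = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    return (nll * mask).sum() / mask.sum()

def semantic_distill_loss(student_h, teacher_h):
    """Illustrative distillation term: cosine distance between the
    student's hidden states and a teacher's semantic features."""
    s = student_h / np.linalg.norm(student_h, axis=-1, keepdims=True)
    t = teacher_h / np.linalg.norm(teacher_h, axis=-1, keepdims=True)
    return 1.0 - (s * t).sum(axis=-1).mean()

def total_loss(logits, targets, mask, student_h, teacher_h, lam=1.0):
    # lam is a hypothetical weighting between the two terms
    return masked_token_ce(logits, targets, mask) + lam * semantic_distill_loss(
        student_h, teacher_h
    )

# Toy example: uniform logits, half the positions masked, perfectly
# matched student/teacher features (distillation term = 0).
logits = np.zeros((4, 5))
targets = np.array([0, 1, 2, 3])
mask = np.array([1.0, 1.0, 0.0, 0.0])
h = np.ones((4, 8))
loss = total_loss(logits, targets, mask, h, h)
```

In this toy case the distillation term vanishes and the total loss reduces to the masked cross-entropy of a uniform distribution over 5 tokens, i.e. about log(5).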