{"title":"TSELM:使用离散时标和语言模型提取目标发言人","authors":"Beilong Tang, Bang Zeng, Ming Li","doi":"arxiv-2409.07841","DOIUrl":null,"url":null,"abstract":"We propose TSELM, a novel target speaker extraction network that leverages\ndiscrete tokens and language models. TSELM utilizes multiple discretized layers\nfrom WavLM as input tokens and incorporates cross-attention mechanisms to\nintegrate target speaker information. Language models are employed to capture\nthe sequence dependencies, while a scalable HiFi-GAN is used to reconstruct the\naudio from the tokens. By applying a cross-entropy loss, TSELM models the\nprobability distribution of output tokens, thus converting the complex\nregression problem of audio generation into a classification task. Experimental\nresults show that TSELM achieves excellent results in speech quality and\ncomparable results in speech intelligibility.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TSELM: Target Speaker Extraction using Discrete Tokens and Language Models\",\"authors\":\"Beilong Tang, Bang Zeng, Ming Li\",\"doi\":\"arxiv-2409.07841\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose TSELM, a novel target speaker extraction network that leverages\\ndiscrete tokens and language models. TSELM utilizes multiple discretized layers\\nfrom WavLM as input tokens and incorporates cross-attention mechanisms to\\nintegrate target speaker information. Language models are employed to capture\\nthe sequence dependencies, while a scalable HiFi-GAN is used to reconstruct the\\naudio from the tokens. By applying a cross-entropy loss, TSELM models the\\nprobability distribution of output tokens, thus converting the complex\\nregression problem of audio generation into a classification task. Experimental\\nresults show that TSELM achieves excellent results in speech quality and\\ncomparable results in speech intelligibility.\",\"PeriodicalId\":501284,\"journal\":{\"name\":\"arXiv - EE - Audio and Speech Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Audio and Speech Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07841\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07841","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
TSELM: Target Speaker Extraction using Discrete Tokens and Language Models
We propose TSELM, a novel target speaker extraction network that leverages
discrete tokens and language models. TSELM takes as input discrete tokens derived
from multiple discretized WavLM layers and uses cross-attention to
integrate target speaker information. Language models are employed to capture
the sequence dependencies, while a scalable HiFi-GAN is used to reconstruct the
audio from the tokens. By applying a cross-entropy loss, TSELM models the
probability distribution of output tokens, thus converting the complex
regression problem of audio generation into a classification task. Experiments
show that TSELM achieves excellent speech quality and comparable speech
intelligibility.
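The core idea, predicting discrete target-speech tokens with a cross-entropy loss instead of regressing waveforms, can be illustrated with a minimal PyTorch sketch. The module names, dimensions, single-vector speaker embedding, and the use of a plain Transformer encoder as the "language model" are illustrative assumptions, not the paper's actual architecture; a real pipeline would also need a WavLM-plus-k-means tokenizer on the input side and a scalable HiFi-GAN vocoder to turn predicted tokens back into audio.

```python
import torch
import torch.nn as nn

class TokenExtractorSketch(nn.Module):
    """Sketch: mixture tokens + speaker cue -> per-frame target-token logits."""

    def __init__(self, vocab_size=1024, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)        # embed discrete mixture tokens
        self.spk_proj = nn.Linear(192, d_model)                   # project an enrollment speaker embedding (dim assumed)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=n_layers)  # stands in for the token language model
        self.head = nn.Linear(d_model, vocab_size)                # classify each frame into a discrete token

    def forward(self, mix_tokens, spk_embedding):
        # mix_tokens: (B, T) integer tokens from a discretized SSL front end (e.g., k-means over WavLM layers)
        # spk_embedding: (B, 192) enrollment speaker embedding
        x = self.token_emb(mix_tokens)                            # (B, T, d_model)
        spk = self.spk_proj(spk_embedding).unsqueeze(1)           # (B, 1, d_model)
        fused, _ = self.cross_attn(query=x, key=spk, value=spk)   # inject target-speaker information
        h = self.lm(x + fused)                                    # sequence modeling over the fused tokens
        return self.head(h)                                       # (B, T, vocab_size) logits

# Training step: cross-entropy against the clean target's tokens turns
# audio generation into per-frame token classification.
model = TokenExtractorSketch()
mix_tokens = torch.randint(0, 1024, (2, 100))      # toy mixture token sequence
target_tokens = torch.randint(0, 1024, (2, 100))   # tokens of the clean target speech
spk_embedding = torch.randn(2, 192)                # toy enrollment embedding
logits = model(mix_tokens, spk_embedding)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), target_tokens)
loss.backward()
```

At inference, the predicted token sequence (e.g., the argmax or a sampled sequence per frame) would be passed to the vocoder to reconstruct the target speaker's waveform.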