{"title":"EzAudio: Enhancing Text-to-Audio Generation with Efficient Diffusion Transformer","authors":"Jiarui Hai, Yong Xu, Hao Zhang, Chenxing Li, Helin Wang, Mounya Elhilali, Dong Yu","doi":"arxiv-2409.10819","DOIUrl":null,"url":null,"abstract":"Latent diffusion models have shown promising results in text-to-audio (T2A)\ngeneration tasks, yet previous models have encountered difficulties in\ngeneration quality, computational cost, diffusion sampling, and data\npreparation. In this paper, we introduce EzAudio, a transformer-based T2A\ndiffusion model, to handle these challenges. Our approach includes several key\ninnovations: (1) We build the T2A model on the latent space of a 1D waveform\nVariational Autoencoder (VAE), avoiding the complexities of handling 2D\nspectrogram representations and using an additional neural vocoder. (2) We\ndesign an optimized diffusion transformer architecture specifically tailored\nfor audio latent representations and diffusion modeling, which enhances\nconvergence speed, training stability, and memory usage, making the training\nprocess easier and more efficient. (3) To tackle data scarcity, we adopt a\ndata-efficient training strategy that leverages unlabeled data for learning\nacoustic dependencies, audio caption data annotated by audio-language models\nfor text-to-audio alignment learning, and human-labeled data for fine-tuning.\n(4) We introduce a classifier-free guidance (CFG) rescaling method that\nsimplifies EzAudio by achieving strong prompt alignment while preserving great\naudio quality when using larger CFG scores, eliminating the need to struggle\nwith finding the optimal CFG score to balance this trade-off. EzAudio surpasses\nexisting open-source models in both objective metrics and subjective\nevaluations, delivering realistic listening experiences while maintaining a\nstreamlined model structure, low training costs, and an easy-to-follow training\npipeline. Code, data, and pre-trained models are released at:\nhttps://haidog-yaqub.github.io/EzAudio-Page/.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10819","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Latent diffusion models have shown promising results in text-to-audio (T2A)
generation tasks, yet previous models have encountered difficulties in
generation quality, computational cost, diffusion sampling, and data
preparation. In this paper, we introduce EzAudio, a transformer-based T2A
diffusion model, to handle these challenges. Our approach includes several key
innovations: (1) We build the T2A model on the latent space of a 1D waveform
Variational Autoencoder (VAE), avoiding the complexities of handling 2D
spectrogram representations and the need for an additional neural vocoder. (2) We
design an optimized diffusion transformer architecture specifically tailored
for audio latent representations and diffusion modeling, improving convergence
speed, training stability, and memory efficiency, and making the training
process simpler and more efficient. (3) To tackle data scarcity, we adopt a
data-efficient training strategy that leverages unlabeled data for learning
acoustic dependencies, audio caption data annotated by audio-language models
for text-to-audio alignment learning, and human-labeled data for fine-tuning.
(4) We introduce a classifier-free guidance (CFG) rescaling method that
simplifies the use of EzAudio by maintaining strong prompt alignment while
preserving audio quality at larger CFG scores, eliminating the need to search
for an optimal CFG score that balances this trade-off. EzAudio surpasses
existing open-source models in both objective metrics and subjective
evaluations, delivering realistic listening experiences while maintaining a
streamlined model structure, low training costs, and an easy-to-follow training
pipeline. Code, data, and pre-trained models are released at:
https://haidog-yaqub.github.io/EzAudio-Page/.
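
To make concrete how innovation (1) removes the spectrogram and vocoder stages, below is a minimal sketch of a 1D waveform VAE whose latent sequence a diffusion transformer would denoise. The class name, layer choices, and hyperparameters are illustrative assumptions, not the released EzAudio implementation.

```python
# Illustrative sketch only: a 1D waveform VAE whose latent space a diffusion
# transformer operates on. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class WaveformVAE(nn.Module):
    """Hypothetical 1D VAE: waveform <-> 1D latent, no mel spectrogram, no vocoder."""
    def __init__(self, latent_dim=128, hop=512):
        super().__init__()
        # Strided 1D convolution compresses the raw waveform into a latent sequence.
        self.encoder = nn.Conv1d(1, latent_dim, kernel_size=hop, stride=hop)
        # A transposed convolution maps latents straight back to a waveform,
        # replacing the separate neural vocoder used by spectrogram-based T2A models.
        self.decoder = nn.ConvTranspose1d(latent_dim, 1, kernel_size=hop, stride=hop)

    def encode(self, wav):   # (B, 1, T) -> (B, latent_dim, T // hop)
        return self.encoder(wav)

    def decode(self, z):     # (B, latent_dim, L) -> (B, 1, L * hop)
        return self.decoder(z)

vae = WaveformVAE()
wav = torch.randn(1, 1, 512 * 256)   # a few seconds of mono audio
z = vae.encode(wav)                   # 1D latent the diffusion transformer would denoise
recon = vae.decode(z)                 # decoded waveform, no vocoder stage
print(z.shape, recon.shape)
```

The point of the sketch is the interface: the generative model only ever sees a compact 1D latent sequence, and decoding back to audio is a single learned step rather than a spectrogram inversion plus vocoder pipeline.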
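Innovation (4) rescales the classifier-free guidance output so that large CFG scores improve prompt alignment without degrading audio quality. The sketch below uses one common formulation from the diffusion literature, which rescales the guided prediction's per-sample standard deviation toward that of the conditional prediction; EzAudio's exact rescaling rule may differ, so treat this as an assumption.

```python
# Illustrative CFG-with-rescaling sketch; the exact EzAudio formulation may differ.
import torch

def cfg_rescale(noise_cond, noise_uncond, guidance_scale=5.0, rescale=0.7, eps=1e-8):
    """Combine conditional/unconditional noise predictions with rescaled CFG.

    noise_cond, noise_uncond: (B, C, L) noise predictions from the diffusion model.
    guidance_scale: CFG weight w; larger values push harder toward the prompt.
    rescale: blend factor between fully rescaled and plain CFG predictions.
    """
    # Standard classifier-free guidance.
    noise_cfg = noise_uncond + guidance_scale * (noise_cond - noise_uncond)

    # Rescale so the guided prediction keeps the conditional prediction's scale,
    # counteracting the quality loss that large guidance weights otherwise cause.
    dims = list(range(1, noise_cond.ndim))
    std_cond = noise_cond.std(dim=dims, keepdim=True)
    std_cfg = noise_cfg.std(dim=dims, keepdim=True)
    noise_rescaled = noise_cfg * (std_cond / (std_cfg + eps))

    # Interpolate between the rescaled and the plain CFG prediction.
    return rescale * noise_rescaled + (1.0 - rescale) * noise_cfg

# Usage with dummy predictions:
cond = torch.randn(2, 128, 256)
uncond = torch.randn(2, 128, 256)
guided = cfg_rescale(cond, uncond, guidance_scale=7.0)
print(guided.shape)
```

With a rule of this kind, the guidance scale mainly controls prompt adherence while the rescaling keeps the prediction statistics in range, which is what removes the need to hand-tune a single CFG score that trades alignment against quality.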