CoNeTTE: An Efficient Audio Captioning System Leveraging Multiple Datasets With Task Embedding

Impact Factor 4.1 · CAS Tier 2 (Computer Science) · JCR Q1 (Acoustics)
Étienne Labbé;Thomas Pellegrini;Julien Pinquier
{"title":"CoNeTTE: An Efficient Audio Captioning System Leveraging Multiple Datasets With Task Embedding","authors":"Étienne Labbé;Thomas Pellegrini;Julien Pinquier","doi":"10.1109/TASLP.2024.3430813","DOIUrl":null,"url":null,"abstract":"Automated Audio Captioning (AAC) involves generating natural language descriptions of audio content, using encoder-decoder architectures. An audio encoder produces audio embeddings fed to a decoder, usually a Transformer decoder, for caption generation. In this work, we describe our model, which novelty, compared to existing models, lies in the use of a ConvNeXt architecture as audio encoder, adapted from the vision domain to audio classification. This model, called CNext-trans, achieved state-of-the-art scores on the AudioCaps (AC) dataset and performed competitively on Clotho (CL), while using four to forty times fewer parameters than existing models. We examine potential biases in the AC dataset due to its origin from AudioSet by investigating unbiased encoder's impact on performance. Using the well-known PANN's CNN14, for instance, as an unbiased encoder, we observed a 0.017 absolute reduction in SPIDEr score (where higher scores indicate better performance). To improve cross-dataset performance, we conducted experiments by combining multiple AAC datasets (AC, CL, MACS, WavCaps) for training. Although this strategy enhanced overall model performance across datasets, it still fell short compared to models trained specifically on a single target dataset, indicating the absence of a one-size-fits-all model. To mitigate performance gaps between datasets, we introduced a Task Embedding (TE) token, allowing the model to identify the source dataset for each input sample. We provide insights into the impact of these TEs on both the form (words) and content (sound event types) of the generated captions. The resulting model, named CoNeTTE, an unbiased CNext-trans model enriched with dataset-specific Task Embeddings, achieved SPIDEr scores of 0.467 and 0.310 on AC and CL, respectively.","PeriodicalId":13332,"journal":{"name":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","volume":"32 ","pages":"3785-3794"},"PeriodicalIF":4.1000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10603439/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ACOUSTICS","Score":null,"Total":0}
Citations: 0

Abstract

Automated Audio Captioning (AAC) involves generating natural language descriptions of audio content using encoder-decoder architectures. An audio encoder produces audio embeddings that are fed to a decoder, usually a Transformer decoder, for caption generation. In this work, we describe our model, whose novelty compared to existing models lies in the use of a ConvNeXt architecture as the audio encoder, adapted from the vision domain to audio classification. This model, called CNext-trans, achieved state-of-the-art scores on the AudioCaps (AC) dataset and performed competitively on Clotho (CL), while using four to forty times fewer parameters than existing models. We examine potential biases in the AC dataset due to its origin in AudioSet by investigating an unbiased encoder's impact on performance. Using the well-known CNN14 from PANNs, for instance, as an unbiased encoder, we observed a 0.017 absolute reduction in SPIDEr score (where higher scores indicate better performance). To improve cross-dataset performance, we conducted experiments combining multiple AAC datasets (AC, CL, MACS, WavCaps) for training. Although this strategy enhanced overall model performance across datasets, it still fell short compared to models trained specifically on a single target dataset, indicating the absence of a one-size-fits-all model. To mitigate performance gaps between datasets, we introduced a Task Embedding (TE) token, allowing the model to identify the source dataset of each input sample. We provide insights into the impact of these TEs on both the form (words) and content (sound event types) of the generated captions. The resulting model, named CoNeTTE, an unbiased CNext-trans model enriched with dataset-specific Task Embeddings, achieved SPIDEr scores of 0.467 and 0.310 on AC and CL, respectively.
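For readers who want a concrete picture of the architecture described above, here is a minimal, illustrative PyTorch sketch of a CoNeTTE-style model: a convolutional audio encoder standing in for ConvNeXt, one learned Task Embedding (TE) vector per source dataset, and a Transformer decoder generating the caption. All dimensions, the vocabulary size, the module names, and the placement of the TE token (prepended to the encoder output here) are assumptions made for this sketch, not the authors' implementation. (For context, SPIDEr, the metric reported above, is conventionally the mean of the SPICE and CIDEr scores.)

```python
# Illustrative sketch only: a CoNeTTE-style encoder-decoder with a
# dataset-specific Task Embedding. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn

class ConetteSketch(nn.Module):
    def __init__(self, vocab_size=5000, d_model=256, n_datasets=4):
        super().__init__()
        # Stand-in for the ConvNeXt audio encoder: any module mapping a
        # log-mel spectrogram (batch, n_mels, frames) to (batch, time, d_model).
        self.encoder = nn.Sequential(
            nn.Conv1d(64, d_model, kernel_size=3, padding=1),
            nn.GELU(),
        )
        # One learned TE vector per training dataset
        # (e.g. AudioCaps, Clotho, MACS, WavCaps).
        self.task_emb = nn.Embedding(n_datasets, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, mel, dataset_id, caption_tokens):
        # Encode audio: (batch, 64, frames) -> (batch, frames, d_model).
        memory = self.encoder(mel).transpose(1, 2)
        # Prepend the dataset-specific TE token to the encoder output so the
        # decoder can condition on the caption style of the source dataset
        # (one plausible placement; the paper's mechanism may differ).
        te = self.task_emb(dataset_id).unsqueeze(1)
        memory = torch.cat([te, memory], dim=1)
        # Autoregressive decoding with a causal mask over the caption tokens.
        tgt = self.token_emb(caption_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)

# Usage: two clips, with assumed ids 0 = AudioCaps and 1 = Clotho.
model = ConetteSketch()
mel = torch.randn(2, 64, 100)
logits = model(mel, torch.tensor([0, 1]), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 5000])
```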
Source journal

IEEE/ACM Transactions on Audio, Speech, and Language Processing (Acoustics; Engineering, Electrical & Electronic)
CiteScore: 11.30
Self-citation rate: 11.10%
Articles per year: 217
Journal description: The IEEE/ACM Transactions on Audio, Speech, and Language Processing covers audio, speech and language processing and the sciences that support them. In audio processing: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. In speech processing: areas such as speech analysis, synthesis, coding, speech and speaker recognition, speech production and perception, and speech enhancement. In language processing: speech and text analysis, understanding, generation, dialog management, translation, summarization, question answering and document indexing and retrieval, as well as general language modeling.