AudioBERT: Audio Knowledge Augmented Language Model

Hyunjong Ok, Suho Yoo, Jaeho Lee
{"title":"AudioBERT:音频知识增强语言模型","authors":"Hyunjong Ok, Suho Yoo, Jaeho Lee","doi":"arxiv-2409.08199","DOIUrl":null,"url":null,"abstract":"Recent studies have identified that language models, pretrained on text-only\ndatasets, often lack elementary visual knowledge, \\textit{e.g.,} colors of\neveryday objects. Motivated by this observation, we ask whether a similar\nshortcoming exists in terms of the \\textit{auditory} knowledge. To answer this\nquestion, we construct a new dataset called AuditoryBench, which consists of\ntwo novel tasks for evaluating auditory knowledge. Based on our analysis using\nthe benchmark, we find that language models also suffer from a severe lack of\nauditory knowledge. To address this limitation, we propose AudioBERT, a novel\nmethod to augment the auditory knowledge of BERT through a retrieval-based\napproach. First, we detect auditory knowledge spans in prompts to query our\nretrieval model efficiently. Then, we inject audio knowledge into BERT and\nswitch on low-rank adaptation for effective adaptation when audio knowledge is\nrequired. Our experiments demonstrate that AudioBERT is quite effective,\nachieving superior performance on the AuditoryBench. The dataset and code are\navailable at \\bulurl{https://github.com/HJ-Ok/AudioBERT}.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AudioBERT: Audio Knowledge Augmented Language Model\",\"authors\":\"Hyunjong Ok, Suho Yoo, Jaeho Lee\",\"doi\":\"arxiv-2409.08199\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent studies have identified that language models, pretrained on text-only\\ndatasets, often lack elementary visual knowledge, \\\\textit{e.g.,} colors of\\neveryday objects. Motivated by this observation, we ask whether a similar\\nshortcoming exists in terms of the \\\\textit{auditory} knowledge. To answer this\\nquestion, we construct a new dataset called AuditoryBench, which consists of\\ntwo novel tasks for evaluating auditory knowledge. Based on our analysis using\\nthe benchmark, we find that language models also suffer from a severe lack of\\nauditory knowledge. To address this limitation, we propose AudioBERT, a novel\\nmethod to augment the auditory knowledge of BERT through a retrieval-based\\napproach. First, we detect auditory knowledge spans in prompts to query our\\nretrieval model efficiently. Then, we inject audio knowledge into BERT and\\nswitch on low-rank adaptation for effective adaptation when audio knowledge is\\nrequired. Our experiments demonstrate that AudioBERT is quite effective,\\nachieving superior performance on the AuditoryBench. 
The dataset and code are\\navailable at \\\\bulurl{https://github.com/HJ-Ok/AudioBERT}.\",\"PeriodicalId\":501284,\"journal\":{\"name\":\"arXiv - EE - Audio and Speech Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Audio and Speech Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08199\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08199","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent studies have identified that language models pretrained on text-only datasets often lack elementary visual knowledge, e.g., the colors of everyday objects. Motivated by this observation, we ask whether a similar shortcoming exists for auditory knowledge. To answer this question, we construct a new dataset called AuditoryBench, which consists of two novel tasks for evaluating auditory knowledge. Based on our analysis using the benchmark, we find that language models also suffer from a severe lack of auditory knowledge. To address this limitation, we propose AudioBERT, a novel method to augment the auditory knowledge of BERT through a retrieval-based approach. First, we detect auditory knowledge spans in prompts to query our retrieval model efficiently. Then, we inject audio knowledge into BERT and switch on low-rank adaptation for effective adaptation when audio knowledge is required. Our experiments demonstrate that AudioBERT is quite effective, achieving superior performance on AuditoryBench. The dataset and code are available at https://github.com/HJ-Ok/AudioBERT.
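
The abstract sketches a three-step pipeline: detect which tokens in a prompt refer to sounds, retrieve a matching audio embedding, and inject it into BERT while switching on low-rank adaptation (LoRA) only when audio knowledge is needed. The paper's actual interfaces are not reproduced on this page, so the following is a minimal PyTorch sketch of that retrieval-then-inject pattern; SpanDetector, AudioRetriever, LoRALinear, and all tensor shapes are hypothetical stand-ins, not the authors' implementation.

    # Hypothetical sketch of a retrieval-then-inject pipeline (not the authors' code).
    import torch
    import torch.nn as nn

    class SpanDetector(nn.Module):
        """Tags each token as outside (0) or inside (1) an auditory-knowledge span."""
        def __init__(self, hidden: int = 768):
            super().__init__()
            self.classifier = nn.Linear(hidden, 2)

        def forward(self, token_states: torch.Tensor) -> torch.Tensor:
            # token_states: (batch, seq_len, hidden) -> (batch, seq_len) 0/1 tags
            return self.classifier(token_states).argmax(dim=-1)

    class AudioRetriever:
        """Nearest-neighbor lookup over precomputed audio embeddings
        (a CLAP-like joint text-audio space is assumed here)."""
        def __init__(self, audio_index: torch.Tensor):
            self.audio_index = audio_index  # (num_clips, hidden)

        def retrieve(self, span_embedding: torch.Tensor) -> torch.Tensor:
            sims = span_embedding @ self.audio_index.T   # (batch, num_clips)
            return self.audio_index[sims.argmax(dim=-1)]  # (batch, hidden)

    class LoRALinear(nn.Module):
        """A frozen linear layer plus a low-rank update that is switched on
        only when the input requires audio knowledge."""
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay frozen
            self.lora_a = nn.Linear(base.in_features, rank, bias=False)
            self.lora_b = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.lora_b.weight)  # the update starts as a no-op

        def forward(self, x: torch.Tensor, use_audio: bool) -> torch.Tensor:
            out = self.base(x)
            if use_audio:  # "switch on" the low-rank path
                out = out + self.lora_b(self.lora_a(x))
            return out

    # End-to-end toy run with random tensors standing in for real embeddings.
    hidden = 768
    states = torch.randn(1, 16, hidden)                  # stand-in BERT token states
    detector = SpanDetector(hidden)
    retriever = AudioRetriever(torch.randn(100, hidden))
    layer = LoRALinear(nn.Linear(hidden, hidden))

    tags = detector(states)                              # which tokens mention sounds?
    needs_audio = bool(tags.any())
    if needs_audio:
        span = states[0, tags[0].bool()].mean(dim=0)     # pool the detected span
        audio = retriever.retrieve(span.unsqueeze(0))    # (1, hidden)
        states = torch.cat([audio.unsqueeze(1), states], dim=1)  # prepend audio token
    output = layer(states, use_audio=needs_audio)

In a real system the span detector would be trained on AuditoryBench-style annotations and the retriever would index genuine audio-clip embeddings; random tensors are used here only so the sketch runs end to end.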