AudioBERT: Audio Knowledge Augmented Language Model
Hyunjong Ok, Suho Yoo, Jaeho Lee
arXiv - EE - Audio and Speech Processing, 2024-09-12
DOI: https://doi.org/arxiv-2409.08199
Citations: 0
Abstract
Recent studies have identified that language models pretrained on text-only
datasets often lack elementary visual knowledge, e.g., the colors of
everyday objects. Motivated by this observation, we ask whether a similar
shortcoming exists for auditory knowledge. To answer this question, we
construct a new dataset called AuditoryBench, which consists of two novel
tasks for evaluating auditory knowledge. Our analysis on this benchmark
shows that language models also suffer from a severe lack of auditory
knowledge. To address this limitation, we propose AudioBERT, a novel method
that augments the auditory knowledge of BERT through a retrieval-based
approach. First, we detect auditory knowledge spans in prompts to query our
retrieval model efficiently. Then, we inject the retrieved audio knowledge
into BERT and switch on low-rank adaptation to adapt the model effectively
when audio knowledge is required. Our experiments demonstrate that AudioBERT
is quite effective, achieving superior performance on AuditoryBench. The
dataset and code are available at https://github.com/HJ-Ok/AudioBERT.
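The abstract outlines a three-step pipeline: detect an auditory-knowledge span in the prompt, retrieve a matching audio embedding, and switch on a low-rank (LoRA) update only when audio knowledge is required. The sketch below is a minimal, self-contained illustration of that control flow under stated assumptions — the keyword-based span detector, the toy cosine-similarity retriever, and all dimensions are hypothetical stand-ins, not the paper's actual implementation (see the linked repository for that).

```python
# Hypothetical sketch of an AudioBERT-style pipeline. All names, the keyword
# detector, and the toy retriever are illustrative assumptions.
import numpy as np

AUDIO_KEYWORDS = {"bark", "siren", "thunder", "meow"}  # toy span detector


def detect_auditory_span(prompt: str):
    """Return the first token that looks like an auditory concept, if any."""
    for token in prompt.lower().split():
        if token.strip(".,") in AUDIO_KEYWORDS:
            return token.strip(".,")
    return None


class ToyAudioRetriever:
    """Cosine-similarity lookup over a small bank of audio embeddings."""

    def __init__(self, bank: dict):
        self.keys = list(bank)
        self.vecs = np.stack([bank[k] for k in self.keys])

    def retrieve(self, query_vec: np.ndarray) -> str:
        sims = self.vecs @ query_vec / (
            np.linalg.norm(self.vecs, axis=1) * np.linalg.norm(query_vec)
        )
        return self.keys[int(np.argmax(sims))]


class LoRALinear:
    """Linear layer whose low-rank update can be toggled per input,
    mirroring 'switch on low-rank adaptation when audio knowledge is
    required'."""

    def __init__(self, dim: int, rank: int = 4, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(dim, dim))        # frozen base weight
        self.A = rng.normal(size=(dim, rank)) * 0.1  # LoRA down-projection
        self.B = np.zeros((rank, dim))               # LoRA up-projection
        self.lora_enabled = False                    # toggled per prompt

    def forward(self, x: np.ndarray) -> np.ndarray:
        out = x @ self.W
        if self.lora_enabled:                        # audio knowledge needed
            out = out + x @ self.A @ self.B
        return out


def run_pipeline(prompt: str, retriever: ToyAudioRetriever,
                 layer: LoRALinear, query_vec: np.ndarray):
    """Enable the LoRA path only when an auditory span is detected."""
    span = detect_auditory_span(prompt)
    layer.lora_enabled = span is not None
    retrieved = retriever.retrieve(query_vec) if span else None
    return span, retrieved
```

For a prompt such as "A siren is louder than a meow.", `run_pipeline` detects the span "siren", retrieves the nearest audio embedding, and enables the LoRA branch; for a prompt with no auditory concept, the base weights are used unchanged.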