Autoregressive Self-Evaluation: A Case Study of Music Generation Using Large Language Models

Berker Banar, S. Colton
{"title":"Autoregressive Self-Evaluation: A Case Study of Music Generation Using Large Language Models","authors":"Berker Banar, S. Colton","doi":"10.1109/CAI54212.2023.00118","DOIUrl":null,"url":null,"abstract":"Autoregressive models have shown significant success in many tasks such as natural language generation and music composition. However, generic training mechanisms with off-the-shelf loss functions (e.g. cross-entropy), where not much attention is paid to the specifics of the task, do not necessarily guarantee success as different data modalities (e.g. text, visuals, music) exhibit different natures. In this study, we present a novel autoregressive self-evaluation framework to assess the performance of autoregressive models with both domain-agnostic and domain-specific metrics. We demonstrate this strategy with a case study of music generation using GPT-2 within a transfer learning paradigm. We contrast and compare the effects of fundamental parameters in autoregressive generation such as the temperature in sampling and the length of the generated sequence.","PeriodicalId":129324,"journal":{"name":"2023 IEEE Conference on Artificial Intelligence (CAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Conference on Artificial Intelligence (CAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAI54212.2023.00118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Autoregressive models have shown significant success in many tasks such as natural language generation and music composition. However, generic training mechanisms with off-the-shelf loss functions (e.g., cross-entropy), which pay little attention to the specifics of the task, do not necessarily guarantee success, as different data modalities (e.g., text, visuals, music) exhibit different natures. In this study, we present a novel autoregressive self-evaluation framework to assess the performance of autoregressive models with both domain-agnostic and domain-specific metrics. We demonstrate this strategy with a case study of music generation using GPT-2 within a transfer learning paradigm. We compare and contrast the effects of fundamental parameters in autoregressive generation, such as the sampling temperature and the length of the generated sequence.
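To make the two generation parameters concrete, below is a minimal sketch of temperature-controlled autoregressive sampling with GPT-2 via the Hugging Face transformers library. This is an illustrative assumption of the setup, not the authors' implementation: the checkpoint, the prompt, and the swept values are placeholders.

```python
# Minimal sketch of temperature-controlled autoregressive sampling with GPT-2,
# using the Hugging Face `transformers` library. The prompt and the swept
# values below are illustrative placeholders, not the paper's configuration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # a music-fine-tuned checkpoint would be loaded here

# Placeholder prompt; in the music setting this would be a tokenized musical seed.
prompt_ids = tokenizer.encode("C4 E4 G4", return_tensors="pt")

# Sweep the two generation parameters the paper contrasts:
# sampling temperature and generated-sequence length.
for temperature in (0.7, 1.0, 1.3):
    for max_length in (128, 256, 512):
        output_ids = model.generate(
            prompt_ids,
            do_sample=True,           # sample from the distribution instead of greedy decoding
            temperature=temperature,  # <1 sharpens, >1 flattens the next-token distribution
            max_length=max_length,    # total length of the generated sequence
            pad_token_id=tokenizer.eos_token_id,
        )
        sequence = tokenizer.decode(output_ids[0])
        # ...evaluate `sequence` with domain-agnostic and domain-specific metrics...
```

Low temperatures concentrate probability mass on the model's top predictions and tend to yield safe but repetitive continuations, while high temperatures flatten the distribution and trade coherence for diversity; this is the kind of effect the proposed self-evaluation metrics are meant to expose.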