{"title":"METEOR: Melody-aware Texture-controllable Symbolic Orchestral Music Generation","authors":"Dinh-Viet-Toan Le, Yi-Hsuan Yang","doi":"arxiv-2409.11753","DOIUrl":null,"url":null,"abstract":"Western music is often characterized by a homophonic texture, in which the\nmusical content can be organized into a melody and an accompaniment. In\norchestral music, in particular, the composer can select specific\ncharacteristics for each instrument's part within the accompaniment, while also\nneeding to adapt the melody to suit the capabilities of the instruments\nperforming it. In this work, we propose METEOR, a model for Melody-aware\nTexture-controllable Orchestral music generation. This model performs symbolic\nmulti-track music style transfer with a focus on melodic fidelity. We allow\nbar- and track-level controllability of the accompaniment with various textural\nattributes while keeping a homophonic texture. We show that the model can\nachieve controllability performances similar to strong baselines while greatly\nimprove melodic fidelity.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11753","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Western music is often characterized by a homophonic texture, in which the musical content can be organized into a melody and an accompaniment. In orchestral music in particular, the composer can select specific characteristics for each instrument's part within the accompaniment, while also needing to adapt the melody to suit the capabilities of the instruments performing it. In this work, we propose METEOR, a model for Melody-aware Texture-controllable Orchestral music generation. The model performs symbolic multi-track music style transfer with a focus on melodic fidelity. We allow bar- and track-level controllability of the accompaniment through various textural attributes while keeping a homophonic texture. We show that the model achieves controllability performance similar to that of strong baselines while greatly improving melodic fidelity.
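
The abstract does not specify how the bar- and track-level textural controls are represented; as a rough, hypothetical illustration only, the sketch below shows one common way such attributes (here, assumed note-density and polyphony bins) could be serialized as conditioning tokens prepended to each (track, bar) segment of a symbolic music sequence. All names and value ranges are assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch (not from the paper): encoding bar- and track-level
# texture attributes as control tokens for a symbolic multi-track sequence.
from dataclasses import dataclass


@dataclass
class TextureControl:
    track: str      # instrument/track name, e.g. "cello" (assumed naming)
    bar: int        # bar index within the piece
    density: int    # quantized note-density bin, e.g. 0..7 (assumed range)
    polyphony: int  # quantized polyphony bin, e.g. 0..7 (assumed range)


def control_tokens(ctrl: TextureControl) -> list[str]:
    """Serialize one (track, bar) control into tokens that would be
    prepended to that segment's note events."""
    return [
        f"TRACK_{ctrl.track}",
        f"BAR_{ctrl.bar}",
        f"DENSITY_{ctrl.density}",
        f"POLYPHONY_{ctrl.polyphony}",
    ]


if __name__ == "__main__":
    ctrl = TextureControl(track="cello", bar=12, density=3, polyphony=1)
    print(control_tokens(ctrl))
    # ['TRACK_cello', 'BAR_12', 'DENSITY_3', 'POLYPHONY_1']
```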