A computational model for predicting perceived musical expression in branding scenarios
Steffen Lepa, Martin Herzog, J. Steffens, Andreas Schoenrock, Hauke Egermann
Journal of New Music Research, 49(1), 387–402 (published 2020-06-16). DOI: 10.1080/09298215.2020.1778041
Citations: 3
Abstract
We describe the development of a computational model predicting listener-perceived expressions of music in branding contexts. Representative ground truth from multi-national online listening experiments was combined with machine learning of music branding expert knowledge and audio signal analysis toolbox outputs. A mixture of random forest and traditional regression models is able to predict average ratings of perceived brand image on four dimensions. Resulting cross-validated prediction accuracy (R²) was Arousal: 61%, Valence: 44%, Authenticity: 55%, and Timeliness: 74%. Audio descriptors for rhythm, instrumentation, and musical style contributed most. Adaptive sub-models for different marketing target groups further increase prediction accuracy.
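The abstract describes an ensemble of a random forest and traditional regression predicting averaged listener ratings on four dimensions from audio descriptors, evaluated with cross-validated R². The following is a minimal sketch of such a blend in Python with scikit-learn; the synthetic data, descriptor count, equal-weight blend, and model settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' implementation): blend a random forest with
# a linear regression to predict mean listener ratings on four expression
# dimensions from audio descriptors, scoring each dimension with
# cross-validated R^2. All data here are synthetic placeholders.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder design matrix: one row per music excerpt, one column per audio
# descriptor (e.g. rhythm, instrumentation, or style features).
n_excerpts, n_descriptors = 200, 20
X = rng.normal(size=(n_excerpts, n_descriptors))

# Placeholder targets: averaged listener ratings on the four dimensions
# named in the abstract.
dimensions = ["Arousal", "Valence", "Authenticity", "Timeliness"]
weights = rng.normal(size=(n_descriptors, len(dimensions)))
Y = X @ weights + rng.normal(scale=2.0, size=(n_excerpts, len(dimensions)))


class ForestLinearBlend(BaseEstimator, RegressorMixin):
    """Average the predictions of a random forest and a linear regression."""

    def fit(self, X, y):
        self.forest_ = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        self.linear_ = LinearRegression().fit(X, y)
        return self

    def predict(self, X):
        # Equal-weight blend; the 50/50 weighting is an assumption of this sketch.
        return 0.5 * (self.forest_.predict(X) + self.linear_.predict(X))


# One model per expression dimension, each scored by 5-fold cross-validated R^2.
for j, name in enumerate(dimensions):
    r2 = cross_val_score(ForestLinearBlend(), X, Y[:, j], cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```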
About the journal:
The Journal of New Music Research (JNMR) publishes material which increases our understanding of music and musical processes by systematic, scientific and technological means. Research published in the journal is innovative, empirically grounded and often, but not exclusively, uses quantitative methods. Articles are both musically relevant and scientifically rigorous, giving full technical details. No bounds are placed on the music or musical behaviours at issue: popular music, music of diverse cultures and the canon of western classical music are all within the Journal’s scope. Articles deal with theory, analysis, composition, performance, uses of music, instruments and other music technologies. The Journal was founded in 1972 with the original title Interface to reflect its interdisciplinary nature, drawing on musicology (including music theory), computer science, psychology, acoustics, philosophy, and other disciplines.