Duration, song section, entropy: Suggestions for a model of rapid music recognition processes
Felix Christian Thiesen, R. Kopiez, Daniel Müllensiefen, Christoph Reuter, Isabella Czedik-Eysenberg
Journal of New Music Research, 49(1), 334–348 (2 July 2020)
DOI: 10.1080/09298215.2020.1784955 (https://doi.org/10.1080/09298215.2020.1784955)
Citations: 2
Abstract
In an online study, N = 517 participants rated 48 very short musical stimuli taken from well-known pop songs with regard to arrangement parameters and cross-modal variables. Identification rates for songs and artists ranged from 0% to 7%. We observed associations between detection rates and both increasing stimulus durations and structural sections (chorus or verse). Analyses of the cross-modal variables revealed a main factor, representing the perceived 'orderliness' of a plink (a very short musical excerpt), as a strong predictor of title recognition. When psychoacoustic low-level features were entered, Spectral Entropy became the main predictor. The presence of a singing voice additionally seemed to facilitate recognition processes.
Journal description:
The Journal of New Music Research (JNMR) publishes material which increases our understanding of music and musical processes by systematic, scientific and technological means. Research published in the journal is innovative, empirically grounded and often, but not exclusively, uses quantitative methods. Articles are both musically relevant and scientifically rigorous, giving full technical details. No bounds are placed on the music or musical behaviours at issue: popular music, music of diverse cultures and the canon of western classical music are all within the Journal’s scope. Articles deal with theory, analysis, composition, performance, uses of music, instruments and other music technologies. The Journal was founded in 1972 with the original title Interface to reflect its interdisciplinary nature, drawing on musicology (including music theory), computer science, psychology, acoustics, philosophy, and other disciplines.