{"title":"Parallel computation of time-varying convolution","authors":"Victor Lazzarini","doi":"10.1080/09298215.2020.1810280","DOIUrl":"https://doi.org/10.1080/09298215.2020.1810280","url":null,"abstract":"This paper introduces a method for computing the time-varying convolution in parallel. It discusses the motivations for this approach, detailing the limitations with the current serial implementation. A detailed review of the signal processing involved is presented, describing the time-varying filter as a modification of the time-invariant case. This is followed by description of the parallel method, which is then implemented in the Open Computing Language. An analysis of tests result is provided, detailing the improvements on the existing approach and noting the cases where it is not the most suitable option.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"403 - 415"},"PeriodicalIF":1.1,"publicationDate":"2020-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1810280","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49405774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drum rhythm spaces: From polyphonic similarity to generative maps","authors":"Daniel Gómez-Marín, S. Jordà, P. Herrera","doi":"10.1080/09298215.2020.1806887","DOIUrl":"https://doi.org/10.1080/09298215.2020.1806887","url":null,"abstract":"This paper reports on the design and evaluation of drum rhythm spaces as interactive bi-dimensional maps used for the visualisation, retrieval and generation of drum patterns. We carry out two experiments exploring human processing of polyphonic drum patterns concluding with a list of descriptors that significantly influence similarity sensations. These features are used to build spaces based on drum pattern collections, where patterns are organised by similarity, modelled according to human perception. A drum-interpolation algorithm is introduced (and evaluated) to enhance rhythm space functionality by means of patterns that bound it, converting a discrete space to a continuous generative one.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"438 - 456"},"PeriodicalIF":1.1,"publicationDate":"2020-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1806887","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47931973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harmony and form in Brazilian Choro: A corpus-driven approach to musical style analysis","authors":"Fabian C. Moss, Willian Fernandes Souza, M. Rohrmeier","doi":"10.1080/09298215.2020.1797109","DOIUrl":"https://doi.org/10.1080/09298215.2020.1797109","url":null,"abstract":"This corpus study constitutes the first quantitative style analysis of Choro, a primarily instrumental music genre that emerged in Brazil at the end of the 19th century. We evaluate its description in a recent comprehensive textbook by transcribing the chord symbols and formal structure of the 295 representative pieces in the Choro Songbook. Our approach uncovers central stylistic traits of this musical idiom on empirical grounds. It thus advances data-driven musical style analysis by studying both harmony and form in a musical genre that lies outside the traditional canon.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"416 - 437"},"PeriodicalIF":1.1,"publicationDate":"2020-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1797109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44184134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playing technique classification for bowed string instruments from raw audio","authors":"A. Kruger, J. P. Jacobs","doi":"10.1080/09298215.2020.1784957","DOIUrl":"https://doi.org/10.1080/09298215.2020.1784957","url":null,"abstract":"Music instrument playing technique classification based on raw audio is a relatively unexplored area of music information retrieval research. This study systematically investigates the use of traditional audio features augmented by features based on the Hartley transform, used as input to a multiclass support vector machine (SVM) classifier, to identify up to 11 different playing techniques performed on each of the violin, viola, cello, and contrabass. Furthermore, 36- and 44-class joint instrument and playing technique classifiers were developed that achieved macro-average F-measures exceeding 0.88. Our approach expands and improves on the state-of-the-art study, which implemented sparse-coded magnitude and phase-derived spectral features.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"320 - 333"},"PeriodicalIF":1.1,"publicationDate":"2020-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1784957","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42447795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Duration, song section, entropy: Suggestions for a model of rapid music recognition processes","authors":"Felix Christian Thiesen, R. Kopiez, Daniel Müllensiefen, Christoph Reuter, Isabella Czedik-Eysenberg","doi":"10.1080/09298215.2020.1784955","DOIUrl":"https://doi.org/10.1080/09298215.2020.1784955","url":null,"abstract":"In an online study, N = 517 participants rated 48 very short musical stimuli comprised of well-known pop songs with regard to arrangement parameters and cross-modal variables. Identification rates for songs and artists ranged between 0-7%. We observed associations between increasing stimulus durations as well as structural sections (chorus or verse) and detection rates. Analyses of the cross-modal variables revealed a main factor, representing the perceived ‘orderliness' of a plink as a strong predictor for title recognition. When psychoacoustic low-level features were entered, Spectral Entropy became the main predictor. The presence of a singing voice additionally seemed to facilitate recognition processes.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"334 - 348"},"PeriodicalIF":1.1,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1784955","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46745704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurable changes in piano performance of scales and arpeggios following a Body Mapping workshop","authors":"Teri Slade, G. Comeau, D. Russell","doi":"10.1080/09298215.2020.1784958","DOIUrl":"https://doi.org/10.1080/09298215.2020.1784958","url":null,"abstract":"Body Mapping is becoming increasingly popular among musicians as an educational approach to improve bodily movement and thereby the audible quality of music performances. This study used MIDI data to quantitatively measure changes in scale and arpeggio piano performance one day before and one day after a Body Mapping workshop. While there were subtle changes in the MIDI data, these changes were generally neither statistically significant, nor a magnitude that would be audible. Based on these findings, we theorise that reports of immediate improvements to music performance originate in visual dominance: audience members observe changes in bodily movement and perceive this as improved sound quality.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"362 - 372"},"PeriodicalIF":1.1,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1784958","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47845076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discrete Fourier transform-based method for analysis of a vibrato tone","authors":"Hee-Suk Pang, Jun-Seok Lim, Seokjin Lee","doi":"10.1080/09298215.2020.1784959","DOIUrl":"https://doi.org/10.1080/09298215.2020.1784959","url":null,"abstract":"Vibrato is one of the most common musical techniques used for the enrichment of vocal and musical instrument sounds. We propose a method that can analyse the intonation, vibrato rate, and vibrato extent of a vibrato tone as a function of time, which is based on the discrete Fourier transform of its fundamental frequency trajectory. According to experimental results, the proposed method is robust to the irregularities in the fundamental frequency trajectory. In addition, the proposed method provides different results for intonation and vibrato extent from those of Prame’s method when the fundamental frequency trajectory is not perfectly sinusoidal.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"307 - 319"},"PeriodicalIF":1.1,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1784959","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45539528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Redefining sad music: Music’s structure suggests at least two sad states","authors":"L. Warrenburg","doi":"10.1080/09298215.2020.1784956","DOIUrl":"https://doi.org/10.1080/09298215.2020.1784956","url":null,"abstract":"Many researchers have noted inconsistencies between descriptions and effects of nominally sad music. The current study addresses whether traditional music-related sadness can be broken down into more than one category. Melancholic and grieving musical passages were collected in three stages. Participants with superior aural skills rated 18 structural parameters of these musical passages on 7-point unipolar scales. The results are consistent with the idea that musical parameters differ in melancholic and grieving states and that what has been previously defined as sad music may, in fact, be conflating more than one emotional state.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"373 - 386"},"PeriodicalIF":1.1,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1784956","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46512575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A computational model for predicting perceived musical expression in branding scenarios","authors":"Steffen Lepa, Martin Herzog, J. Steffens, Andreas Schoenrock, Hauke Egermann","doi":"10.1080/09298215.2020.1778041","DOIUrl":"https://doi.org/10.1080/09298215.2020.1778041","url":null,"abstract":"We describe the development of a computational model predicting listener-perceived expressions of music in branding contexts. Representative ground truth from multi-national online listening experiments was combined with machine learning of music branding expert knowledge, and audio signal analysis toolbox outputs. A mixture of random forest and traditional regression models is able to predict average ratings of perceived brand image on four dimensions. Resulting cross-validated prediction accuracy (R²) was Arousal: 61%, Valence: 44%, Authenticity: 55%, and Timeliness: 74%. Audio descriptors for rhythm, instrumentation, and musical style contributed most. Adaptive sub-models for different marketing target groups further increase prediction accuracy.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"387 - 402"},"PeriodicalIF":1.1,"publicationDate":"2020-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1778041","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44537978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of tempo on relative note durations in a performed samba groove","authors":"Mari Romarheim Haugen, A. Danielsen","doi":"10.1080/09298215.2020.1767655","DOIUrl":"https://doi.org/10.1080/09298215.2020.1767655","url":null,"abstract":"Previous studies have revealed uneven duration patterns at the sixteenth note level of samba. In the present study, we investigated the influence of tempo on such sixteenth-note patterns in a performed samba groove.The results revealed an uneven duration pattern in all tempi. Interestingly, the shortest note becomes relatively shorter and the longest relatively longer as the tempo increases. We suggest that the differences in relative durations between tempi reflect the need to maintain the samba sixteenth note ‘template’ in all tempi: producing the samba ‘feel’ requires that relative durations have to be adjusted to tempo.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"349 - 361"},"PeriodicalIF":1.1,"publicationDate":"2020-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1767655","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49439412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}