Music Perception, Pub Date: 2020-03-11, DOI: 10.1525/MP.2020.37.4.278
Maria A. G. Witek, Jingyi Liu, John Kuubertzie, A. Yankyera, Senyo Adzei, P. Vuust
Title: A Critical Cross-cultural Study of Sensorimotor and Groove Responses to Syncopation Among Ghanaian and American University Students and Staff
Abstract: The pleasurable desire to move to a beat is known as groove and is partly explained by rhythmic syncopation. While many contemporary groove-directed genres originated in the African diaspora, groove music psychology has almost exclusively studied European or North American listeners. While cross-cultural approaches can help us understand how different populations respond to music, comparing African and Western musical behaviors has historically tended to rely on stereotypes. Here we report on two studies in which sensorimotor and groove responses to syncopation were measured in university students and staff from Cape Coast, Ghana, and Williamstown, MA, United States. In our experimental designs and interpretations, we show sensitivity towards the ethical implications of doing cross-cultural research in an African context. The Ghanaian group showed greater synchronization precision than the American group during monophonic syncopated patterns, but this was not reflected in synchronization accuracy. There was no significant group difference in the pleasurable desire to move. Our results have implications for how we understand the relationship between exposure and synchronization, and how we define syncopation in cultural and musical contexts. We hope our critical approach to cross-cultural comparison contributes to developing music psychology into a more inclusive and culturally grounded field.
Music Perception, Pub Date: 2020-03-11, DOI: 10.1525/MP.2020.37.4.298
M. Costa, M. Nese
Title: Perceived Tension, Movement, and Pleasantness in Harmonic Musical Intervals and Noises
Abstract: Perceived valence, tension, and movement of harmonic musical intervals (from the unison to the octave, presented in a low and a high register) and standard noises (brown, pink, white, blue, purple) were assessed in two studies that differed in the crossmodal procedure by which tension and movement were rated: a proprioceptive device or a visual analog scale. Valence was evaluated in both studies with the visual analog scale. In a preliminary study, the proprioceptive device was calibrated with a psychophysical procedure. Roughness of the stimuli was included as a covariate. Tension was perceived as higher in dissonant intervals and in intervals presented in the high register. The higher the high-pitch energy content of a standard noise, the higher its perceived tension. The visual analog scale yielded higher tension ratings than the proprioceptive device. Perceived movement was higher in dissonant intervals, in high-register intervals, and in standard noises compared with musical intervals. High-pitch spectrum noises were associated with a greater sense of movement than low-pitch spectrum noises. Consonant intervals and low-register intervals were evaluated as more pleasant than dissonant and high-register intervals. High-pitch spectrum purple and blue noises were evaluated as more unpleasant than low-pitch spectrum noises.
Music Perception, Pub Date: 2020-03-11, DOI: 10.1525/MP.2020.37.4.347
Fred Cummins
Title: The Territory Between Speech and Song
Abstract: Speech and song have frequently been treated as contrasting categories. Here we observe a variety of collective activities in which multiple participants utter the same thing at the same time, a behavior we call joint speech. This simple empirical definition serves to single out practices of ritual, protest, and the enactment of identity that span the range from speech to song, and allows consideration of the manner in which such activities serve to ground collectives. We consider how the musical elements in joint speech, such as rhythm, melody, and instrumentation, are related to the context of occurrence and the purposes of the participants. While music and language have been greatly altered by developments in media technologies—from writing to recordings—joint speech has been, and continues to be, an integral part of practices, both formal and informal, from which communities derive their identity. The absence of joint speech from the scientific treatment of language has made language appear as an abstract, intellectual, and highly individualized activity. Joint speech may act as a corrective, drawing our attention back to the voice in context and to the manner in which collective identities are enacted.
Music Perception, Pub Date: 2020-02-01, DOI: 10.1525/mp.2020.37.3.208
Peter M. C. Harrison, Marcus T. Pearce
Title: A Computational Cognitive Model for the Analysis and Generation of Voice Leadings
Abstract: Voice leading is a common task in Western music composition whose conventions are consistent with fundamental principles of auditory perception. Here we introduce a computational cognitive model of voice leading, intended both for analyzing voice-leading practices within encoded musical corpora and for generating new voice leadings for unseen chord sequences. The model is feature-based, quantifying the desirability of a given voice leading on the basis of features derived from Huron's (2001) perceptual account of voice leading. We use the model to analyze a corpus of 370 chorale harmonizations by J. S. Bach, and demonstrate its application to the voicing of harmonic progressions in different musical genres. The model is implemented in a new R package, "voicer," which we release alongside this paper.
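The authors' model itself is released as the R package "voicer"; the Python sketch below only illustrates the general feature-based idea, with two hypothetical features and weights (voice-leading distance and a parallel-fifths penalty) standing in for the model's actual feature set. Candidate voicings of the next chord are enumerated and the one with the lowest weighted cost is chosen.

```python
from itertools import product

def voice_leading_distance(prev, cand):
    """Total melodic movement: sum of absolute semitone shifts per voice."""
    return sum(abs(a - b) for a, b in zip(prev, cand))

def has_parallel_fifths(prev, cand):
    """True if some voice pair forms a perfect fifth in both chords and moves."""
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            if ((prev[j] - prev[i]) % 12 == 7 and
                    (cand[j] - cand[i]) % 12 == 7 and
                    prev[i] != cand[i]):
                return True
    return False

def candidate_voicings(pitch_classes, low=48, high=79):
    """All ascending in-range voicings realizing the given pitch classes."""
    target = sorted(pitch_classes)
    for pitches in product(range(low, high + 1), repeat=len(pitch_classes)):
        if (list(pitches) == sorted(pitches) and
                sorted(p % 12 for p in pitches) == target):
            yield list(pitches)

def best_voicing(prev, pitch_classes, w_dist=1.0, w_parallel=10.0):
    """Choose the candidate voicing minimizing a weighted feature cost."""
    return min(candidate_voicings(pitch_classes),
               key=lambda cand: w_dist * voice_leading_distance(prev, cand)
                                + w_parallel * has_parallel_fifths(prev, cand))

# Voice a C major -> G major progression from a close-position C major triad
print(best_voicing([60, 64, 67], [7, 11, 2]))  # → [59, 62, 67]
```

The chosen voicing keeps the common tone G and moves the other voices by the smallest total distance, which is the kind of behavior the perceptually motivated features are meant to reward.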
Music Perception, Pub Date: 2020-02-01, DOI: 10.1525/mp.2020.37.3.240
L. Warrenburg
Title: Choosing the Right Tune
Abstract: When designing a new study of how music can portray and elicit emotion, one of the most crucial decisions involves choosing the best stimuli. Every researcher must find musical samples that can capture an emotional state, are of appropriate length, and have minimal potential for biasing participants. Researchers have often reused musical excerpts employed by other scholars, but the appropriate musical choices depend on the specific goals of the study in question and will likely differ across research designs. The intention of this paper is to examine how musical stimuli have been selected in a sample of 306 research articles dating from 1928 through 2018. Analyses are presented regarding the designated emotions, how the stimuli were selected, the durations of the stimuli, whether the stimuli are excerpts from a longer work, and whether the passages have been used in studies of perceived or induced emotion. The results suggest that the literature relies on nine emotion terms, focuses more on perceived than on induced emotion, and contains mostly short musical stimuli. I suggest that some of the inconclusive results of previous reviews may be due to the inconsistent use of emotion terms throughout the music community.
Music Perception, Pub Date: 2020-02-01, DOI: 10.1525/mp.2020.37.3.185
Nori Jacoby, Elizabeth Hellmuth Margulis, Martin Clayton, Erin Hannon, Henkjan Honing, John Iversen, Tobias Robert Klein, Samuel A. Mehr, Lara Pearson, Isabelle Peretz, Marc Perlman, Rainer Polak, Andrea Ravignani, Patrick E. Savage, Gavin Steingo, Catherine J. Stevens, Laurel Trainor, Sandra Trehub, Michael Veal, Melanie Wald-Fuhrmann
Title: Cross-Cultural Work in Music Cognition: Challenges, Insights, and Recommendations
Abstract: Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of "music" and "culture."
Music Perception, Pub Date: 2020-02-01, DOI: 10.1525/mp.2020.37.3.225
R. Matsunaga, P. Hartono, K. Yokosawa, J. Abe
Title: The Development of Sensitivity to Tonality Structure of Music
Abstract: Tonal schemata are shaped by culture-specific music exposure. The acquisition of tonal schemata has been delineated in Western mono-musical children, but cross-cultural variations have not been explored. We examined how Japanese children acquire tonal schemata in a bi-musical culture characterized by the simultaneous but unbalanced presence of Western (dominant) and traditional Japanese (non-dominant) music. Progress of this acquisition was indexed by gauging children's sensitivity to musical scale membership (differentiating scale tones from non-scale tones) and to differences in tonal stability among scale tones (differentiating the tonic from another scale tone). Children (7-, 9-, 11-, 13-, and 14-year-olds) and adults judged how well two types of target tones (scale tone vs. non-scale tone; tonic vs. non-tonic) fit a preceding Western or traditional Japanese tonal context. Results showed that even 7-year-olds were sensitive to Western scale membership, while sensitivity to Japanese scale membership did not appear until age nine. Sensitivity to the tonic emerged at age 13 for both types of melodies. These results suggest that even though they are exposed to both types of music from birth, Japanese children begin by acquiring the tonal schema of the dominant Western music, and that this acquisition is not delayed relative to that of Western mono-musical peers.
Music Perception, Pub Date: 2019-12-01, DOI: 10.1525/mp.2019.37.2.134
Weixia Zhang, Fang Liu, Linshu Zhou, Wan-Chen Wang, Hanyuan Jiang, Cunmei Jiang
Title: The Effects of Timbre on Neural Responses to Musical Emotion
Abstract: Timbre is an important factor that affects the perception of emotion in music, yet little is known about its effects on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether neural responses to musical emotion differ when the same melodies are presented in different timbres. In a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics, presented in violin, flute, and voice versions. Results showed a larger P3 and a larger left-anterior-distributed LPC in response to affectively incongruent versus congruent trials in the voice version. For the flute version, only the LPC effect was found, distributed over centro-parietal electrodes. Unlike the voice and flute versions, an N400 effect was observed in the violin version. These findings reveal different patterns of neural response to musical emotion when the same melodies are presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.
Music Perception, Pub Date: 2019-12-01, DOI: 10.1525/mp.2019.37.2.165
Sarah A. Sauvé, M. Pearce
Title: Information-theoretic Modeling of Perceived Musical Complexity
Abstract: What makes a piece of music appear complex to a listener? This research extends previous work by Eerola (2016) by examining information content generated by a computational model of auditory expectation (IDyOM), based on statistical learning and probabilistic prediction, as an empirical definition of perceived musical complexity. We systematically manipulated the melody, rhythm, and harmony of short polyphonic musical excerpts, using the model to ensure that these manipulations varied information content in the intended direction. Complexity ratings collected from 28 participants correlated positively and most strongly with melodic and harmonic information content, which corresponded to descriptive musical features such as the proportion of out-of-key notes and tonal ambiguity. When individual differences were considered, they explained more variance than the manipulated predictors. Musical background was not a significant predictor of complexity ratings. The results support information content, as implemented by IDyOM, as an information-theoretic measure of complexity, and extend IDyOM's range of applications to perceived complexity.
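IDyOM itself combines multiple viewpoint models with sophisticated smoothing; the sketch below is only a minimal illustration of the underlying information-content measure, using a hypothetical first-order (bigram) model trained on a toy corpus. Each event's information content is the negative log probability of that event given its context, and a melody's mean information content serves as the complexity estimate.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count first-order transitions over symbol sequences (e.g., intervals)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def mean_information_content(seq, counts, alphabet):
    """Mean -log2 P(event | previous event), with add-one smoothing."""
    ics = []
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(counts[prev].values()) + len(alphabet)
        p = (counts[prev][nxt] + 1) / total
        ics.append(-math.log2(p))
    return sum(ics) / len(ics)

# Toy corpus of melodic interval sequences (major-scale steps, very regular)
corpus = [[2, 2, 1, 2, 2, 2, 1]] * 20
counts = train_bigram(corpus)
alphabet = {1, 2, 5, -3}

predictable = [2, 2, 1, 2, 2]   # conforms to the corpus statistics
surprising = [2, 5, -3, 5, 1]   # rare transitions -> high information content
print(mean_information_content(predictable, counts, alphabet) <
      mean_information_content(surprising, counts, alphabet))  # → True
```

A melody whose transitions are well predicted by the trained model yields low mean information content, matching the intuition that statistically expected music sounds simpler.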
Music Perception, Pub Date: 2019-12-01, DOI: 10.1525/mp.2019.37.2.111
Jo Fougner Skaansar, B. Laeng, A. Danielsen
Title: Microtiming and Mental Effort
Abstract: The present study tested two assumptions concerning the auditory processing of microtiming in musical grooves (i.e., repeating, movement-inducing rhythmic patterns): 1) microtiming challenges the listener's internal framework of timing regularities, or meter, and demands cognitive effort; 2) microtiming promotes a "groove" experience—a pleasant sense of wanting to move along with the music. With professional jazz musicians and nonmusicians as participants, we hypothesized that microtiming asynchronies between bass and drums (varying from −80 to 80 ms) were related to a) an increase in mental effort (as indexed by pupillometry), and b) a decrease in the quality of sensorimotor synchronization (as indexed by reduced finger-tapping stability). We found bass/drums microtiming asynchronies to be positively related to pupil dilation and negatively related to tapping stability. In contrast, steady timekeeping (the presence of an eighth-note hi-hat in the grooves) decreased pupil size and increased tapping performance, though there were no conclusive differences in pupil response between musicians and nonmusicians. However, jazz musicians consistently tapped with higher stability than nonmusicians, reflecting an effect of rhythmic expertise. Except for the condition most closely resembling real music, participants preferred the on-the-grid grooves to displacements in microtiming, and bass-succeeding-drums conditions were preferred over the reverse.
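The stimulus manipulation and the tapping-stability measure can be illustrated with a minimal sketch (the toy data and thresholds here are hypothetical, not the study's materials): a constant shift imposes a bass/drums microtiming asynchrony, and the standard deviation of signed tap-to-beat asynchronies indexes (in)stability of synchronization.

```python
import statistics

def impose_microtiming(onsets_ms, shift_ms):
    """Shift one instrument's onsets, creating a bass/drums asynchrony."""
    return [t + shift_ms for t in onsets_ms]

def asynchronies(taps_ms, beats_ms):
    """Signed tap-to-beat asynchronies (negative = tap precedes the beat)."""
    return [t - b for t, b in zip(taps_ms, beats_ms)]

def tapping_stability_sd(taps_ms, beats_ms):
    """SD of asynchronies: a lower SD indicates more stable synchronization."""
    return statistics.stdev(asynchronies(taps_ms, beats_ms))

drums = [0, 500, 1000, 1500, 2000]       # metronomic drum onsets (ms)
bass = impose_microtiming(drums, -40)    # bass leads the drums by 40 ms

steady_taps = [10, 505, 1008, 1495, 2003]
erratic_taps = [40, 470, 1060, 1480, 2050]
print(tapping_stability_sd(steady_taps, drums) <
      tapping_stability_sd(erratic_taps, drums))  # → True
```

Note that the mean asynchrony (accuracy) and its variability (stability) are distinct measures: a listener can tap consistently early yet still be highly stable.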