Music Perception: Latest Publications

Across-Channel Auditory Gap Detection
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-09-09. DOI: 10.1525/mp.2020.38.1.66
Authors: A. J. Weaver, Matthew Hoch, Lindsey Soles Quinn, J. Blumsack
Abstract: In studies of perceptual and neural processing differences between musicians and nonmusicians, participants are typically dichotomized on the basis of personal report of musical experience. The present study relates self-reported musical experience and objectively measured musical aptitude to a skill that is important in music perception: temporal resolution (or acuity). The Advanced Measures of Music Audiation (AMMA) test was used to objectively assess participant musical aptitude, and adaptive psychophysical measurements were obtained to assess temporal resolution on two tasks: within-channel gap detection and across-channel gap detection. Results suggest that musical aptitude measured with the AMMA and self-reported music experience (duration of music instruction) are both related to temporal resolution ability in musicians. The relationship between musical aptitude and/or duration of music training is important to music educators advocating for the benefits of music programs, as well as in behavioral and neurophysiological research.
Citations: 0
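The abstract above mentions adaptive psychophysical measurement of gap-detection thresholds but does not spell out the procedure here. As a minimal sketch only, assuming a generic 2-down/1-up staircase and a simulated listener (the `simulated_response` function, threshold, and step size are all hypothetical, not taken from the study), the estimation could look like this:

```python
import math
import random

def simulated_response(gap_ms, true_threshold_ms=6.0, slope_ms=1.5):
    """Hypothetical listener: detection probability rises with gap duration."""
    p_correct = 1.0 / (1.0 + math.exp(-(gap_ms - true_threshold_ms) / slope_ms))
    return random.random() < p_correct

def two_down_one_up(start_gap_ms=20.0, step_ms=2.0, n_reversals=8):
    """Generic 2-down/1-up staircase; converges near the 70.7%-correct point."""
    gap, streak, direction, reversals = start_gap_ms, 0, None, []
    while len(reversals) < n_reversals:
        if simulated_response(gap):
            streak += 1
            if streak == 2:                    # two correct in a row -> harder trial
                streak = 0
                if direction == "up":
                    reversals.append(gap)
                direction = "down"
                gap = max(gap - step_ms, 0.5)
        else:                                  # any error -> easier trial
            streak = 0
            if direction == "down":
                reversals.append(gap)
            direction = "up"
            gap += step_ms
    # Threshold estimate: mean gap duration at the last few reversals
    return sum(reversals[-6:]) / len(reversals[-6:])

if __name__ == "__main__":
    print(f"Estimated gap-detection threshold: {two_down_one_up():.1f} ms")
```

With real listeners, `simulated_response` would be replaced by the presentation of a stimulus and collection of a keypress; everything else stays the same.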
Experience of Groove Questionnaire
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-09-01. DOI: 10.1525/mp.2020.38.1.46
Authors: Olivier Senn, T. Bechtold, Dawn Rose, Guilherme Câmara, Nina Düvel, R. Jerjen, Lorenz Kilchenmann, Florian Hoesl, A. Baldassarre, Elena Alessandri
Abstract: Music often triggers a pleasurable urge in listeners to move their bodies in response to the rhythm. In music psychology, this experience is commonly referred to as groove. This study presents the Experience of Groove Questionnaire, a newly developed self-report questionnaire that enables respondents to subjectively assess how strongly they feel an urge to move and pleasure while listening to music. The development of the questionnaire was carried out in several stages: candidate questionnaire items were generated on the basis of the groove literature, and their suitability was judged by fifteen groove and rhythm research experts. Two listening experiments were carried out in order to reduce the number of items, to validate the instrument, and to estimate its reliability. The final questionnaire consists of two scales with three items each that reliably measure respondents’ urge to move (Cronbach’s α = .92) and their experience of pleasure (α = .97) while listening to music. The two scales are highly correlated (r = .80), which indicates a strong association between motor and emotional responses to music. The scales of the Experience of Groove Questionnaire can independently be applied in groove research and in a variety of other research contexts in which listeners’ subjective experience of music-induced movement and enjoyment needs to be addressed: for example, the study of the interaction between music and motivation in sports, and research on therapeutic applications of music in people with neurological movement disorders.
Citations: 10
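As a hedged illustration of the reliability and correlation statistics reported in the abstract above (Cronbach's α per three-item scale, and Pearson's r between the two scales), the sketch below computes them on a simulated response matrix. It assumes NumPy is available; the data, item count, and column layout are invented for illustration and are not the questionnaire's actual items or results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 1-7 Likert ratings for 6 items from 40 respondents:
# columns 0-2 stand in for the urge-to-move scale, columns 3-5 for the
# pleasure scale. All values are simulated for illustration only.
rng = np.random.default_rng(0)
latent = rng.normal(4.0, 1.5, size=(40, 1))
ratings = np.clip(np.round(latent + rng.normal(0.0, 0.7, size=(40, 6))), 1, 7)

urge_to_move, pleasure = ratings[:, :3], ratings[:, 3:]
print(f"alpha (urge to move): {cronbach_alpha(urge_to_move):.2f}")
print(f"alpha (pleasure):     {cronbach_alpha(pleasure):.2f}")

# Correlation between the two scale means (the paper reports r = .80 for its
# real data; the value printed here comes from the simulated ratings).
r = np.corrcoef(urge_to_move.mean(axis=1), pleasure.mean(axis=1))[0, 1]
print(f"scale correlation r = {r:.2f}")
```

With real questionnaire data, the two slices of `ratings` would simply be replaced by respondents' item scores for each scale.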
Classical Rondos and Sonatas as Stylistic Categories
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-06-10. DOI: 10.1525/mp.2020.37.5.373
Authors: Jonathan de Souza, Adam Roy, Andrew Goldman
Abstract: Sonata and rondo movements are often defined in terms of large-scale form, yet in the classical era, rondos were also identified according to their lively, cheerful character. We hypothesized that sonatas and rondos could be categorized based on stylistic features, and that rondos would involve more acoustic cues for happiness (e.g., higher average pitch height and higher average attack rate). In a corpus analysis, we examined paired movement openings from 180 instrumental works, composed between 1770 and 1799. Rondos had significantly higher pitch height and attack rate, as predicted, and there were also significant differences related to dynamics, meter, and cadences. We then conducted an experiment involving participants with at least 5 years of formal music training or less than 6 months of formal music training. Participants listened to 120 15-second audio clips, taken from the beginnings of movements in our corpus. After a training phase, they attempted to categorize the excerpts (2AFC task). D-prime scores were significantly higher than chance levels for both groups, and in post-experiment questionnaires, participants without music training reported that rondos sounded happier than sonatas. Overall, these results suggest that classical formal types have distinct stylistic and affective conventions.
Citations: 1
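The abstract above reports d′ (d-prime) scores for the two-alternative categorization task. The exact scoring pipeline is not given here, so the snippet below is only a minimal sketch under standard signal-detection assumptions, treating "rondo" as the signal category and applying a log-linear correction; the trial counts are invented.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    (adding 0.5 to each cell) so that rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one listener classifying 60 rondo and 60 sonata
# excerpts, with "rondo" treated as the signal category. Invented numbers.
print(f"d' = {dprime(hits=45, misses=15, false_alarms=20, correct_rejections=40):.2f}")
```

A d′ reliably above zero across listeners is what "significantly higher than chance" refers to in the abstract.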
Comparing Methods for Analyzing Music-Evoked Autobiographical Memories
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-06-01. DOI: 10.1525/mp.2020.37.5.392
Authors: Amy M. Belfi, Elena Bai, Ava Stroud
Abstract: The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above-chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
Citations: 10
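As a hedged sketch of the classification approach described above (logistic regression distinguishing music-evoked from face-evoked memories, evaluated against chance), the code below trains a cross-validated logistic regression on simulated text-feature scores. It assumes scikit-learn and NumPy; the feature columns are generic stand-ins, not the actual LIWC, Autobiographical Interview, or Evaluative Lexicon variables, and the printed number says nothing about the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: each row is one memory description scored on four generic
# text features (stand-ins for LIWC or Autobiographical Interview variables).
# Labels: 1 = music-evoked, 0 = face-evoked. All values are simulated.
rng = np.random.default_rng(1)
n_memories = 120
labels = rng.integers(0, 2, size=n_memories)
features = rng.normal(size=(n_memories, 4)) + labels[:, None] * np.array([0.8, 0.0, -0.5, 0.3])

# Standardize features, fit logistic regression, and estimate accuracy with
# 5-fold cross-validation; chance level for two balanced classes is 0.50.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, features, labels, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.50)")
```

One model per scoring method, as in the study, would simply mean repeating this fit with a different feature matrix each time.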
Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-06-01. DOI: 10.1525/mp.2020.37.5.403
Authors: A. Schiavio, Jan Stupacher, R. Parncutt, R. Timmers
Abstract: In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
Citations: 11
The Selectivity of Musical Advantage
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-06-01. DOI: 10.1525/mp.2020.37.5.423
Authors: William Choi
Abstract: The OPERA hypothesis theorizes how musical experience heightens perceptual acuity to lexical tones. One missing element in the hypothesis is whether musical advantage is general to all or specific to some lexical tones. To further extend the hypothesis, this study investigated whether English musicians consistently outperformed English nonmusicians in perceiving a variety of Cantonese tones. In an AXB discrimination task, the musicians exhibited superior discriminatory performance over the nonmusicians only in the high level, high rising, and mid-level tone contexts. Similarly, in a Cantonese tone sequence recall task, the musicians significantly outperformed the nonmusicians only in the contour tone context but not in the level tone context. Collectively, the results reflect the selectivity of musical advantage: musical experience is only advantageous to the perception of some but not all Cantonese tones, and elements of selectivity can be introduced to the OPERA hypothesis. Methodologically, the findings highlight the need to include a wide variety of lexical tone contrasts when studying music-to-language transfer.
Citations: 11
Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-03-11. DOI: 10.1525/MP.2020.37.4.323
Authors: Eliot Farmer, Crescent Jicol, K. Petrini
Abstract: Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision, and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two agents interacting. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the emotion they had perceived. Measures of accuracy and reaction time were collected from the emotion recognition judgements, while yes/no responses were collected as an indication of felt emotions. Musicians were more accurate than nonmusicians at recognizing emotion in the auditory-only condition, but not in the visual-only or audiovisual conditions. Although music training enhanced recognition of emotion through sound, it did not affect the felt emotion. These findings indicate that emotional processing in music and language may use overlapping but also divergent resources, or that some aspects of emotional processing are less responsive to music training than others. Hence, music training may be an effective rehabilitative device for interpreting others’ emotion through speech.
Citations: 5
Response to Invited Commentaries on The Territory Between Speech and Song
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-03-11. DOI: 10.1525/MP.2020.37.4.366
Authors: Fred Cummins
Citations: 0
Sensorimotor synchronisation with higher metrical levels in music shortens perceived time
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-03-11. DOI: 10.1525/MP.2020.37.4.263
Authors: David Hammerschmidt, Clemens Wöllner
Abstract: The aim of the present study was to investigate if the perception of time is affected by actively attending to different metrical levels in musical rhythmic patterns. In an experiment with a repeated-measures design, musicians and non-musicians were presented with musical rhythmic patterns played at three different tempi. They synchronised with multiple metrical levels (half notes, quarter notes, eighth notes) of these patterns using a finger-tapping paradigm, and also listened without tapping. After each trial, stimulus duration was judged using a verbal estimation paradigm. Results show that the metrical level participants synchronised with influenced perceived time: actively attending to a higher metrical level (half notes, longer inter-tap intervals) led to the shortest time estimations; hence, time was experienced as passing more quickly. Listening without tapping led to the longest time estimations. The faster the tempo of the patterns, the longer the time estimation. While there were no differences between musicians and non-musicians, those participants who tapped more consistently and accurately (as analysed by circular statistics) estimated durations to be shorter. Thus, attending to different metrical levels in music, by deliberately directing attention and motor activity, affects time perception.
Citations: 19
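The abstract above notes that tapping consistency and accuracy were analysed with circular statistics. As a minimal sketch of that kind of analysis (not the authors' code), the snippet below converts each tap's asynchrony from the nearest beat into a phase angle and computes the mean resultant vector length R (consistency) and the circular mean angle (signed accuracy); the beat period and tap times are invented.

```python
import math

def circular_tap_stats(tap_times, beat_times, period):
    """Convert each tap's asynchrony from its nearest beat into a phase angle,
    then return (R, mean_angle): R (0-1) indexes tapping consistency, and the
    circular mean angle indexes signed accuracy (negative = early taps)."""
    angles = []
    for tap in tap_times:
        nearest_beat = min(beat_times, key=lambda b: abs(tap - b))
        phase = (tap - nearest_beat) / period        # fraction of one beat cycle
        angles.append(2.0 * math.pi * phase)
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(x, y), math.atan2(y, x)

# Hypothetical data: beats every 500 ms; taps land about 20 ms early with jitter.
beats = [i * 0.5 for i in range(16)]
taps = [b - 0.020 + 0.010 * math.sin(7 * i) for i, b in enumerate(beats)]
R, mean_angle = circular_tap_stats(taps, beats, period=0.5)
print(f"consistency R = {R:.3f}, mean asynchrony = {mean_angle / (2 * math.pi) * 500:.1f} ms")
```

Higher R means tighter phase locking to the chosen metrical level, which is the sense in which "more consistent" tappers are identified.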
Joint Speech and Its Relation to Joint Action
IF 2.3 | Tier 2 | Psychology
Music Perception. Pub Date: 2020-03-11. DOI: 10.1525/MP.2020.37.4.359
Authors: F. Russo
Abstract: In his article “The Territory Between Speech and Song: A Joint Speech Perspective,” Cummins (2020) argues that research has failed to adequately recognize an important category of vocal activity that falls outside of the domains of language and music, at least as they are typically defined. This category, referred to by Cummins as joint speech, spans a range of vocal activity so broad that it is not possible to define it using musical or phonetic terms. Instead, the feature that draws the varied examples together is vocal activity that is coordinated across participants and embedded in a physical and social context. In this invited commentary, I argue that although joint speech adds an important thread to the discourse on the relations between speech and song by putting an emphasis on the collective, it is ultimately related to a wider class of joint action phenomena found in the animal kingdom.
Citations: 1