Music Perception — Latest Articles

Swinging the Score? Swing Phrasing Cannot Be Communicated via Explicit Notation Instructions Alone
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-04-01 · DOI: 10.1525/mp.2022.39.4.386
C. Corcoran, Jan Stupacher, P. Vuust
Abstract: Jazz musicians usually learn to play with “swing” phrasing by playing by ear. Classical musicians—who play more from musical scores than by ear—are reported to struggle with producing swing. We explored whether classical musicians play with more swing when performing from more detailed swing notation. Thereby we investigated whether a culturally specific improvisational social procedure can be scripted in detailed music notation for musicians from a different performance background. Twenty classical musicians sight-read jazz tunes from three styles of notation, each with a different level of notational complexity. Experienced jazz listeners evaluated the performances. Results showed that more score-independent classical musicians with strong aural abilities played with equally strong swing regardless of notation; more score-dependent musicians swung most with the medium-complexity classical notation. The data suggest that some higher-level swing features, such as appropriate articulation, event durations, and deviations from a beat sequence, can be communicated to a limited extent using written instructions. However, their successful implementation in performance depends on matching instructional complexity to a musician’s skill at decoding and interpreting unfamiliar information. This link between decoding skills and cross-cultural performance makes our findings relevant to ethnological and musicological studies of musical communication processes and perception-action coupling.
Citations: 0
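The swing features named in this abstract (event durations and deviations from an even beat sequence) are often quantified as a swing ratio. The sketch below is not from the paper and uses made-up onset times; it only illustrates one common way such a ratio is computed.

```python
import numpy as np

def swing_ratios(onsets, beat_times):
    """Ratio of on-beat to off-beat eighth-note durations within each beat.

    Assumes two eighth-note onsets per beat; all times are in seconds.
    """
    ratios = []
    for b0, b1 in zip(beat_times[:-1], beat_times[1:]):
        in_beat = onsets[(onsets >= b0) & (onsets < b1)]
        if len(in_beat) >= 2:
            first_dur = in_beat[1] - in_beat[0]   # on-beat eighth note
            second_dur = b1 - in_beat[1]          # off-beat eighth note
            ratios.append(first_dur / second_dur)
    return np.array(ratios)

# Hypothetical performance at 120 BPM: off-beat notes placed 0.33 s into each
# 0.5 s beat give ratios near 2:1 ("hard" swing); 1:1 would be straight eighths.
beats = np.arange(0, 4, 0.5)
onsets = np.sort(np.concatenate([beats, beats + 0.33]))
print(swing_ratios(onsets, beats).round(2))
```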
Song Imitation in Congenital Amusia
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-04-01 · DOI: 10.1525/mp.2022.39.4.341
Ariadne Loutrari, Cunmei Jiang, Fang Liu
Abstract: Congenital amusia is a neurogenetic disorder of pitch perception that may also compromise pitch production. Despite amusics’ long documented difficulties with pitch, previous evidence suggests that familiar music may have an implicit facilitative effect on their performance. It remains, however, unknown whether vocal imitation of song in amusia is influenced by melody familiarity and the presence of lyrics. To address this issue, 13 Mandarin-speaking amusics and 13 matched controls imitated novel song segments with lyrics and on the syllable /la/. Eleven participants in each group also imitated segments of a familiar song. Subsequent acoustic analysis was conducted to measure pitch and timing matching accuracy based on eight acoustic measures. While amusics showed worse imitation performance than controls across seven of the eight pitch and timing measures, melody familiarity was found to have a favorable effect on their performance on three pitch-related acoustic measures. The presence of lyrics did not affect either group’s performance substantially. Correlations were observed between amusics’ performance on the Montreal Battery of Evaluation of Amusia and imitation of the novel song. We discuss implications in terms of music familiarity, memory demands, the relevance of lexical information, and the link between perception and production.
Citations: 2
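A typical pitch-matching measure of the kind the acoustic analysis above refers to is the deviation, in cents, between the imitated and target F0 contours. The following sketch is illustrative only: the contours are hypothetical, and this is not one of the authors' eight measures.

```python
import numpy as np

def mean_abs_cents_error(f0_target_hz, f0_imitation_hz):
    """Mean absolute pitch deviation in cents between two aligned F0 contours.

    Contours are sampled at matching time points; unvoiced frames may be NaN.
    """
    cents = 1200.0 * np.log2(f0_imitation_hz / f0_target_hz)
    return np.nanmean(np.abs(cents))

target = np.array([220.0, 246.9, 261.6, 293.7])      # A3, B3, C4, D4
imitation = np.array([223.0, 240.0, 262.0, 300.0])   # slightly off-pitch singing
print(round(mean_abs_cents_error(target, imitation), 1), "cents")
```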
The Associations Between Music Training, Musical Working Memory, and Visuospatial Working Memory
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-04-01 · DOI: 10.1525/mp.2022.39.4.401
Sebastian Silas, Daniel Müllensiefen, R. Gelding, K. Frieler, Peter M. C. Harrison
Abstract: Prior research studying the relationship between music training (MT) and more general cognitive faculties, such as visuospatial working memory (VSWM), often fails to include tests of musical memory. This may result in causal pathways between MT and other such variables being misrepresented, potentially explaining certain ambiguous findings in the literature concerning the relationship between MT and executive functions. Here we address this problem using latent variable modeling and causal modeling to study a triplet of variables related to working memory: MT, musical working memory (MWM), and VSWM. The triplet framing allows for the potential application of d-separation (similar to mediation analysis) and V-structure search, which is particularly useful since, in the absence of expensive randomized control trials, it can test causal hypotheses using cross-sectional data. We collected data from 148 participants using a battery of MWM and VSWM tasks as well as an MT questionnaire. Our results suggest: 1) VSWM and MT are unrelated, conditional on MWM; and 2) by implication, there is no far transfer between MT and VSWM without near transfer. However, the data are unable to distinguish an unambiguous causal structure. We conclude by discussing the possibility of extending these models to incorporate more complex or cyclic effects.
Citations: 7
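The d-separation logic described above reduces to a conditional-independence check: if MT and VSWM are unrelated once MWM is held constant, their partial correlation given MWM should be near zero. The sketch below demonstrates this with simulated data; it is not the authors' latent variable model, and the effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 148  # same sample size as the study, but the data here are simulated
mt = rng.normal(size=n)                            # music training (standardized)
mwm = 0.6 * mt + rng.normal(scale=0.8, size=n)     # MWM influenced by MT
vswm = 0.5 * mwm + rng.normal(scale=0.9, size=n)   # VSWM influenced only via MWM

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z (plus an intercept) out of both."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

print("r(MT, VSWM)       :", round(np.corrcoef(mt, vswm)[0, 1], 2))   # nonzero
print("r(MT, VSWM | MWM) :", round(partial_corr(mt, vswm, mwm), 2))   # near zero
```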
Can the Intended Messages of Mismatched Lexical Tone in Igbo Music Be Understood? A Test for Listeners’ Perception of the Matched Versus Mismatched Compositions
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-04-01 · DOI: 10.1525/mp.2022.39.4.371
Sunday Ofuani
Abstract: In tone languages, alteration of lexical tone changes the intended meaning. This implies that composers should equally match lexical tone in their music for intelligible communication of the intended textual messages, a compositional approach termed Lexical Tone Determinants (LTD) in this study. Yet, in the Ìgbò language setting, some composers creatively disregard or mismatch lexical tone, an approach branded Musical/Creative Determinants (M/CD). It is believed that mismatched lexical tone in Ìgbò music alters listeners’ comprehension of the intended messages; on the other hand, it is argued that a thorough match of lexical tone constrains musical creativity. Listeners’ perception of textual messages in LTD and M/CD music has not been empirically tested side by side to verify whether comprehension is lost, at least in the Ìgbò language context. The present study addresses this empirical gap, using comparative measures to collect data on listeners’ perception during live performance of newly composed LTD and M/CD pieces. Specifically, it examines whether mismatched lexical tone in Ìgbò music alters message comprehension. The data were collated, presented, and analyzed statistically, with chi-square tests deployed to evaluate differences in message comprehension.
Citations: 1
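The chi-square analysis named in the abstract is a standard test of independence on comprehension counts. A minimal sketch with invented counts (not the study's data) follows.

```python
from scipy.stats import chi2_contingency

# Rows: composition type; columns: listeners who understood vs. misunderstood
# the intended message. Counts are made up for illustration.
table = [[42, 18],   # LTD  (lexical tone matched)
         [25, 35]]   # M/CD (lexical tone mismatched)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```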
Embodied Meter Revisited
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.249
P. Toiviainen, Emily Carlson
Abstract: Previous research has shown that humans tend to embody musical meter at multiple beat levels during spontaneous dance. This work has been based on identifying typical periodic movement patterns, or eigenmovements, and has relied on time-domain analyses. The current study: 1) presents a novel method of using time-frequency analysis in conjunction with group-level tensor decomposition; 2) compares its results to time-domain analysis; and 3) investigates how the amplitude of eigenmovements depends on musical content and genre. Data comprised three-dimensional motion capture of 72 participants’ spontaneous dance movements to 16 stimuli covering eight different genres. Each trial was subjected to a discrete wavelet transform, concatenated into a trial-space-frequency tensor, and decomposed using tensor decomposition. Twelve movement primitives, or eigenmovements, were identified, eleven of which were frequency-locked with one of four metrical levels. The results suggest that time-frequency decomposition can more efficiently group movement directions together. Furthermore, the employed group-level decomposition allows for a straightforward analysis of interstimulus and interparticipant differences in music-induced movement. The amplitude of eigenmovements was found to depend on the amount of fluctuation in the music, particularly at the one- and two-beat levels.
Citations: 3
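The time-frequency step described above can be illustrated with a discrete wavelet transform of a single movement channel, summarizing energy per frequency band; the group-level tensor decomposition (a PARAFAC-style factorization) would then operate on a trials × space × frequency stack of such summaries. The sketch below uses simulated data and the PyWavelets package and is not the authors' pipeline; the frame rate and wavelet are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

fs = 60.0                                 # assumed motion-capture frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
beat_hz = 2.0                             # 120 BPM music
# Simulated vertical movement of one marker: bouncing on the beat plus
# slower swaying at half the beat rate.
movement = (np.sin(2 * np.pi * beat_hz * t)
            + 0.5 * np.sin(2 * np.pi * (beat_hz / 2) * t))

# Discrete wavelet transform; each detail level covers one frequency band,
# so band energies can later be matched to metrical levels.
coeffs = pywt.wavedec(movement, "db4", level=5)   # [cA5, cD5, cD4, cD3, cD2, cD1]
labels = ["approximation"] + [f"detail level {lev}" for lev in range(5, 0, -1)]
for name, c in zip(labels, coeffs):
    print(f"{name:>16}: band energy = {np.sum(c ** 2):9.1f}")
```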
The Idiosyncrasy of Involuntary Musical Imagery Repetition (IMIR) Experiences
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.320
Taylor A. Liptak, D. Omigie, Georgia A. Floridou
Abstract: Involuntary musical imagery repetition (IMIR), colloquially known as “earworms,” is a form of musical imagery that arises involuntarily and repeatedly in the mind. A growing number of studies, based on retrospective reports, suggest that IMIR experiences are associated with certain musical features, such as fast tempo and the presence of lyrics, and with individual differences in music training and engagement. However, research to date has not directly assessed the effect of such musical features on IMIR, and findings about individual differences in music training and engagement are mixed. Using a cross-sectional design (Study 1, n = 263), we examined IMIR content in terms of tempo (fast, slow) and presence of lyrics (instrumental, vocal), and IMIR characteristics (frequency, duration of episode and section) in relation to 1) the musical content (tempo and lyrics) individuals most commonly expose themselves to (music-listening habits), and 2) music training and engagement. We also used an experimental design (Study 2, n = 80) to test the effects of tempo (fast or slow) and the presence of lyrics (instrumental or vocal) on IMIR retrieval and duration. Results from Study 1 showed that the content of music that individuals are typically exposed to with regard to tempo and lyrics predicted and resembled their IMIR content, and that music engagement, but not music training, predicted IMIR frequency. Music training was, however, shown to predict the duration of IMIR episodes. In the experiment (Study 2), tempo did not predict IMIR retrieval, but the presence of lyrics influenced IMIR duration. Taken together, our findings suggest that IMIR is an idiosyncratic experience primed by the music-listening habits and music engagement of the individual.
Citations: 3
The Effect of Subjective Fatigue on Auditory Processing in Musicians and Nonmusicians
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.309
Saransh Jain, N. P. Nataraja, V. Narne
Abstract: We assessed the effect of fatigue on temporal resolution and speech-perception-in-noise abilities in trained instrumental musicians. In a pretest-posttest quasi-experimental research design, trained instrumental musicians (n = 39) and theater artists as nonmusicians (n = 37) participated. Fatigue was measured using a visual analog scale (VAS) under eight fatigue categories. Temporal resolution was measured via the temporal release of masking, and speech perception in noise was assessed via auditory stream segregation. Testing was carried out at two time points: before and after rehearsal. Each participant rehearsed for five to six hours: musicians played their instruments and theater artists conducted stage practice. The results revealed significantly lower VAS scores for both musicians and nonmusicians after rehearsal, indicating that both groups were fatigued after rehearsal. Musicians had higher scores for temporal release of masking and lower scores for auditory stream segregation than nonmusicians in the pre-fatigue condition, indicating musicians’ edge in auditory processing abilities. However, no such differences between musicians and nonmusicians were observed in the post-fatigue testing. We infer that the music-training-related advantage in temporal resolution and speech perception in noise may have been reduced by fatigue. We therefore recommend that musicians consider fatigue a significant factor, as it might affect their performance in auditory processing tasks, and that future researchers treat fatigue as a variable when measuring auditory processing in musicians. However, we restricted auditory processing to temporal resolution and speech perception in noise; generalizing these results to other auditory processes requires further investigation.
Citations: 0
Beat Perception and Production in Musicians and Dancers
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.229
Tram T N Nguyen, Riya Sidhu, J. Everling, Miranda C. Wickett, A. Gibbings, Jessica A. Grahn
Abstract: The ability to perceive and produce a beat is believed to be universal in humans, but individual ability varies. The current study examined four factors that may influence beat perception and production capacity: 1) expertise: music or dance; 2) training style: percussive or nonpercussive; 3) stimulus modality: auditory or visual; and 4) movement type: finger-tap or whole-body bounce. Experiment 1 examined how expertise and training style influenced beat perception and production performance using an auditory beat perception task and a finger-tapping beat production task. Experiment 2 used a similar sample with an audiovisual variant of the beat perception task and a standing knee-bend (bounce) beat production task to assess whole-body movement. The data showed that: 1) musicians were more accurate in a finger-tapping beat synchronization task than dancers and controls; 2) training style did not significantly influence beat perception and production; 3) visual beat information did not benefit any group; and 4) beat synchronization in a full-body movement task was comparable for musicians and dancers, with both groups outperforming controls. The current study suggests that the type of task and measured response interacts with expertise, and that expertise effects may be masked by selection of nonoptimal response types.
Citations: 4
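Finger-tapping beat synchronization of the kind tested here is commonly scored from tap-to-beat asynchronies. A minimal sketch with hypothetical tap times (not the study's data or scoring method) follows.

```python
import numpy as np

def tap_asynchronies(tap_times, beat_times):
    """Signed offset (s) of each tap from its nearest beat; negative = early."""
    idx = np.searchsorted(beat_times, tap_times)
    idx = np.clip(idx, 1, len(beat_times) - 1)
    prev_closer = (np.abs(tap_times - beat_times[idx - 1])
                   < np.abs(tap_times - beat_times[idx]))
    nearest = np.where(prev_closer, beat_times[idx - 1], beat_times[idx])
    return tap_times - nearest

beats = np.arange(0, 10, 0.5)   # 120 BPM metronome, 20 beats
# Hypothetical taps: slightly anticipatory (mean -20 ms) with some jitter.
taps = beats + np.random.default_rng(1).normal(-0.02, 0.015, size=beats.size)
asyn = tap_asynchronies(taps, beats)
print(f"mean asynchrony = {asyn.mean() * 1000:.1f} ms, SD = {asyn.std() * 1000:.1f} ms")
```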
Violinists Employ More Expressive Gesture and Timing Around Global Musical Resolutions
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.268
Aditya Chander, Madeline Huberth, S. Davis, Samantha Silverstein, T. Fujioka
Abstract: Performers express musical structure using variations in dynamics, timbre, timing, and physical gesture. Previous research on instrumental performance of Western classical music has identified increased nontechnical motion (movement considered supplementary to producing sound) and ritardando at cadences. Cadences typically provide resolution to built-up tension at differing levels of importance according to the hierarchical structure of music. Thus, we hypothesized that performers would embody these differences by employing nontechnical motion and rubato, even when not explicitly asked to express them. Expert violinists performed the Allemande from Bach’s Flute Partita in a standing position for motion capture and audio recordings; we then examined nontechnical motion and rubato in four cadential excerpts (two locally important, two globally important) and four noncadential excerpts. Each excerpt was segmented into the buildup to and departure from the dominant-tonic progression. Increased ritardando, as well as nontechnical motion such as side-to-side whole-body swaying and torso rotation, was found in cadential excerpts compared to noncadential excerpts. Moreover, violinists used more nontechnical motion and ritardando in the departure segments of the global cadences, while the buildups also showed the global-local contrast. Our results extend previous findings on the expression of cadences by highlighting the hierarchical nature of embodied musical resolution.
Citations: 0
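Ritardando of the kind measured here can be read off a recording by converting inter-onset intervals of nominally even notes into a local-tempo curve and comparing the cadential approach with the preceding material. The sketch below uses hypothetical onsets, not the study's recordings or its rubato measure.

```python
import numpy as np

# Hypothetical beat onsets (s): twelve steady beats at 120 BPM, then four
# progressively longer intervals broadening into a cadence.
intervals = np.concatenate([np.full(12, 0.50), [0.52, 0.55, 0.60, 0.68]])
onsets = np.cumsum(intervals)

ioi = np.diff(np.concatenate([[0.0], onsets]))   # inter-onset intervals (s)
local_bpm = 60.0 / ioi                           # local tempo, one beat per onset

print("body of phrase     :", round(local_bpm[:12].mean(), 1), "BPM")
print("cadential approach :", round(local_bpm[12:].mean(), 1), "BPM")
```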
The Roles of Absolute Pitch and Timbre in Plink Perception
IF 2.3 · CAS Zone 2 · Psychology
Music Perception · Pub Date: 2022-02-01 · DOI: 10.1525/mp.2022.39.3.289
Rebecca N. Faubion-Trejo, James T. Mantell
Abstract: Listeners can recognize musical excerpts less than one second in duration (plinks). We investigated the roles of timbre and implicit absolute pitch in plink identification, and the time course associated with processing these cues, by measuring listeners’ recognition, response time, and recall of original, mistuned, reversed, and temporally shuffled plinks extracted from popular song recordings. We hypothesized that performance would be best for the original plinks because their acoustic contents were encoded in long-term memory, but that listeners would also be able to identify the manipulated plinks by extracting dynamic and average spectral content. In accordance with our hypotheses, participants responded most rapidly and accurately to the original plinks, although they were notably capable of recognition and recall across all conditions. Our observation of plink recall in the shuffled condition suggests that temporal orderliness is not necessary for plink perception and instead provides evidence for the role of average spectral content. We interpret our results to suggest that listeners process acoustic absolute pitch and timbre information to identify plinks, and we explore the implications for local and global acoustic feature processing.
Citations: 1
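The "average spectral content" invoked above can be illustrated by averaging short-time magnitude spectra over a sub-second excerpt and computing a spectral centroid; both summaries are essentially unchanged by time reversal or shuffling. The sketch below uses a synthetic signal, not the study's stimuli, and the frame settings are arbitrary.

```python
import numpy as np

sr = 22050
t = np.arange(0, 0.4, 1 / sr)                     # a 400 ms "plink"
excerpt = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)

# Average magnitude spectrum over short Hann-windowed frames, then its centroid.
frame, hop = 1024, 512
starts = range(0, len(excerpt) - frame, hop)
frames = np.array([excerpt[i:i + frame] for i in starts]) * np.hanning(frame)
avg_spectrum = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
freqs = np.fft.rfftfreq(frame, 1 / sr)
centroid = (freqs * avg_spectrum).sum() / avg_spectrum.sum()
print(f"spectral centroid = {centroid:.0f} Hz")

# Reversing or shuffling the excerpt in time leaves this frame-averaged spectrum
# (and hence the centroid) essentially unchanged, which is one way such cues
# could support recognition of the manipulated plinks.
```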