Jane A. Brown, Gavin M. Bidelman
Brain and Language, Volume 266, Article 105581 (published 2025-04-25). DOI: 10.1016/j.bandl.2025.105581
https://www.sciencedirect.com/science/article/pii/S0093934X25000501
Attention, musicality, and familiarity shape cortical speech tracking at the musical cocktail party
The “cocktail party problem” challenges our ability to understand speech in noisy environments and often includes background music. Here, we explored the role of background music in speech-in-noise listening. Participants listened to an audiobook in familiar and unfamiliar music while tracking keywords in either speech or song lyrics. We used EEG to measure neural tracking of the audiobook. When speech was masked by music, the modeled temporal response function (TRF) peak latency at 50 ms (P1TRF) was prolonged compared to unmasked. Additionally, P1TRF amplitude was larger in unfamiliar background music, suggesting improved speech tracking. We observed prolonged latencies at 100 ms (N1TRF) when speech was not the attended stimulus, though only in less musical listeners. Our results suggest early neural representations of speech are stronger with both attention and concurrent unfamiliar music, indicating familiar music is more distracting. One’s ability to perceptually filter “musical noise” at the cocktail party also depends on objective musical listening abilities.
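The abstract's latency and amplitude results come from modeled temporal response functions (TRFs), which map a stimulus feature (such as the speech envelope) to the EEG response at a range of time lags. As a rough illustration of the idea — not the authors' actual pipeline, whose parameters are not given here — a TRF can be estimated with time-lagged ridge regression; all values below (sampling rate, lag window, regularization) are assumptions for the toy example:

```python
# Minimal sketch of temporal response function (TRF) estimation via
# time-lagged ridge regression, as commonly used in neural speech
# tracking. Parameters and data are illustrative, not from the paper.
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin=0.0, tmax=0.3, lam=1.0):
    """Estimate the TRF mapping a stimulus envelope to one EEG channel.

    stimulus, eeg : 1-D arrays of equal length, sampled at fs (Hz)
    tmin, tmax    : lag window in seconds (0-300 ms covers P1/N1 peaks)
    lam           : ridge regularization strength (assumed value)
    Returns (lags_s, weights): lag times in seconds and TRF weights.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stimulus)
    # Lagged design matrix: column j holds the stimulus shifted by lags[j],
    # so row t models the EEG at time t from stimulus samples t - lag.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[: n - lag]
    # Ridge solution: w = (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

# Toy check: simulate EEG as the envelope convolved with a known kernel
# whose single peak sits at 50 ms (a "P1-like" component), plus noise.
rng = np.random.default_rng(0)
fs = 100                                    # assumed 100 Hz sampling rate
env = rng.standard_normal(5000)             # stand-in for a speech envelope
true_kernel = np.zeros(31)
true_kernel[5] = 1.0                        # peak at lag 5 samples = 50 ms
eeg = np.convolve(env, true_kernel)[: len(env)] + 0.1 * rng.standard_normal(5000)

lags_s, trf = estimate_trf(env, eeg, fs)
peak_latency_ms = 1000 * lags_s[np.argmax(trf)]  # recovers ~50 ms
```

A prolonged P1(TRF), as reported for music-masked speech, would show up in this framework as the recovered peak shifting to a later lag; a larger P1(TRF) amplitude corresponds to a larger weight at that lag.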
Journal introduction:
An interdisciplinary journal, Brain and Language publishes articles that elucidate the complex relationships among language, brain, and behavior. The journal covers the large variety of modern techniques in cognitive neuroscience, including functional and structural brain imaging, electrophysiology, cellular and molecular neurobiology, genetics, lesion-based approaches, and computational modeling. All articles must relate to human language and be relevant to the understanding of its neurobiological and neurocognitive bases. Published articles in the journal are expected to have significant theoretical novelty and/or practical implications, and use perspectives and methods from psychology, linguistics, and neuroscience along with brain data and brain measures.