Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231184982
Brian C J Moore, Josef Schlittenlacher
{"title":"Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks.","authors":"Brian C J Moore, Josef Schlittenlacher","doi":"10.1177/23312165231184982","DOIUrl":"10.1177/23312165231184982","url":null,"abstract":"<p><p>The diagnosis of noise-induced hearing loss (NIHL) is based on three requirements: a history of exposure to noise with the potential to cause hearing loss; the absence of known causes of hearing loss other than noise exposure; and the presence of certain features in the audiogram. All current methods for diagnosing NIHL have involved examination of the typical features of the audiograms of noise-exposed individuals and the formulation of quantitative rules for the identification of those features. This article describes an alternative approach based on the use of multilayer perceptrons (MLPs). The approach was applied to databases containing the ages and audiograms of individuals claiming compensation for NIHL sustained during military service (M-NIHL), who were assumed mostly to have M-NIHL, and control databases with no known exposure to intense sounds. The MLPs were trained so as to classify individuals as belonging to the exposed or control group based on their audiograms and ages, thereby automatically identifying the features of the audiogram that provide optimal classification. Two databases (noise exposed and nonexposed) were used for training and validation of the MLPs and two independent databases were used for evaluation and further analyses. The best-performing MLP was one trained to identify whether or not an individual had M-NIHL based on age and the audiogram for both ears. This achieved a sensitivity of 0.986 and a specificity of 0.902, giving an overall accuracy markedly higher than for previous methods.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231184982"},"PeriodicalIF":2.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10408324/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10318915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165221076681
Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace
{"title":"Visually biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions.","authors":"Iliza M Butera, Ryan A Stevenson, René H Gifford, Mark T Wallace","doi":"10.1177/23312165221076681","DOIUrl":"10.1177/23312165221076681","url":null,"abstract":"<p><p>The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to-date measuring the McGurk effect in this population and the first that tests the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme \"ba\" dubbed onto the viseme \"ga\"), we found that 55 CI users (87%) reported a fused percept of \"da\" or \"tha\" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls-a result that was concordant with results from the SIFI where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221076681"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/6d/d6/10.1177_23312165221076681.PMC10334005.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9763744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231173234
Emanuele Perugia, Frederic Marmel, Karolina Kluk
{"title":"Feasibility of Diagnosing Dead Regions Using Auditory Steady-State Responses to an Exponentially Amplitude Modulated Tone in Threshold Equalizing Notched Noise, Assessed Using Normal-Hearing Participants.","authors":"Emanuele Perugia, Frederic Marmel, Karolina Kluk","doi":"10.1177/23312165231173234","DOIUrl":"10.1177/23312165231173234","url":null,"abstract":"<p><p>The aim of this study was to assess feasibility of using electrophysiological auditory steady-state response (ASSR) masking for detecting dead regions (DRs). Fifteen normally hearing adults were tested using behavioral and electrophysiological tasks. In the electrophysiological task, ASSRs were recorded to a 2 kHz exponentially amplitude-modulated tone (AM2) presented within a notched threshold equalizing noise (TEN) whose center frequency (CF<sub>NOTCH</sub>) varied. We hypothesized that, in the absence of DRs, ASSR amplitudes would be largest for CF<sub>NOTCH</sub> at/or near the signal frequency. In the presence of a DR at the signal frequency, the largest ASSR amplitude would occur at a frequency (<i>f<sub>max</sub></i>) far away from the signal frequency. The AM2 and the TEN were presented at 60 and 75 dB SPL, respectively. In the behavioral task, for the same maskers as above, the masker level at which an AM and a pure tone could just be distinguished, denoted AM2ML, was determined, for low (10 dB above absolute AM2 threshold) and high (60 dB SPL) signal levels. We also hypothesized that the value of <i>f<sub>max</sub></i> would be similar for both techniques. The ASSR <i>f<sub>max</sub></i> values obtained from grand average ASSR amplitudes, but not from individual amplitudes, were consistent with our hypotheses. The agreement between the behavioral <i>f<sub>max</sub></i> and ASSR <i>f<sub>max</sub></i> was poor. The within-session ASSR-amplitude repeatability was good for AM2 alone, but poor for AM2 in notched TEN. The ASSR-amplitude variability between and within participants seems to be a major roadblock to developing our approach into an effective DR detection method.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231173234"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10336760/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9775441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231188619
Ľuboš Hládek, Bernhard U Seeber
{"title":"Speech Intelligibility in Reverberation is Reduced During Self-Rotation.","authors":"Ľuboš Hládek, Bernhard U Seeber","doi":"10.1177/23312165231188619","DOIUrl":"10.1177/23312165231188619","url":null,"abstract":"<p><p>Speech intelligibility in cocktail party situations has been traditionally studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated if people would rotate to improve speech intelligibility, and we asked if knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations: 0°, ± 90°, 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of the individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated the participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231188619"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10363862/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9872318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231154035
Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson
{"title":"Modified T<sup>2</sup> Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants.","authors":"Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson","doi":"10.1177/23312165231154035","DOIUrl":"10.1177/23312165231154035","url":null,"abstract":"<p><p>The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided (i.e., the stimuli are presented through subjects' hearing aid(s)) CAEP detection in infants with hearing loss. Methods include the conventional Hotellings T<sup>2</sup> test, various modified q-sample statistics, and two novel variants of T<sup>2</sup> statistics, which were designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, and simulated signals. The highest test sensitivities were observed for the modified T<sup>2</sup> statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T<sup>2</sup> test, which showed low detection rates for ensemble sizes <80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T<sup>2</sup> and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231154035"},"PeriodicalIF":2.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9974628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10828646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231182289
Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani
{"title":"Capturing Visual Attention With Perturbed Auditory Spatial Cues.","authors":"Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani","doi":"10.1177/23312165231182289","DOIUrl":"10.1177/23312165231182289","url":null,"abstract":"<p><p>Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues-resulting from cochlear implants (CI) or unilateral hearing loss (uHL)-allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (<i>N</i> = 20), unilateral CI users (<i>N</i> = 20), and individuals with uHL (<i>N</i> = 20). For comparison, we also included a group of normal-hearing (NH, <i>N</i> = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual-attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231182289"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/84/a2/10.1177_23312165231182289.PMC10467228.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10127241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165221148022
Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira
{"title":"Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users.","authors":"Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira","doi":"10.1177/23312165221148022","DOIUrl":"https://doi.org/10.1177/23312165221148022","url":null,"abstract":"<p><p>Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221148022"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a4/9b/10.1177_23312165221148022.PMC9837293.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10746839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231153280
Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt
{"title":"Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise.","authors":"Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt","doi":"10.1177/23312165231153280","DOIUrl":"https://doi.org/10.1177/23312165231153280","url":null,"abstract":"<p><p>Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, and repeat the sentence-final word after presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No), on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, SNR (-4 dB versus +1 dB SNR) increased the middle and late component values. Increasing memory demands (Recall) progressively increased trial baseline and steepened decrease of the late component's values. Trial baseline increased most steeply in the condition of +1 dB SNR with recall. The findings suggest that adding a recall to the auditory task alters effort allocation for listening. Listeners are dynamically re-allocating effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231153280"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/85/b7/10.1177_23312165231153280.PMC10028670.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9514033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments.","authors":"Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur","doi":"10.1177/23312165231212032","DOIUrl":"10.1177/23312165231212032","url":null,"abstract":"<p><p>Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes \"natural soundscapes,\" that is the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). <i>Frontiers in Ecology and Evolution. 10:</i> 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of \"human auditory ecology,\" focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, whether it operate throughout the life span or whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses back to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231212032"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10658775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing · Pub Date: 2023-01-01 · DOI: 10.1177/23312165231181757
Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche
{"title":"Grouping by Time and Pitch Facilitates Free but Not Cued Recall for Word Lists in Normally-Hearing Listeners.","authors":"Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche","doi":"10.1177/23312165231181757","DOIUrl":"https://doi.org/10.1177/23312165231181757","url":null,"abstract":"<p><p>Auditory memory is an important everyday skill evaluated more and more frequently in clinical settings as there is recently a greater recognition of the cost of hearing loss to cognitive systems. Testing often involves reading a list of unrelated items aloud; but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally-hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) on a novel protocol characterizing the effects of suprasegmental properties in speech, namely investigating pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to work eventually with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (over a matter of one minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231181757"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a6/25/10.1177_23312165231181757.PMC10286184.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9712047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}