{"title":"Knowledge revision processes during reading: How pictures influence the activation of outdated information.","authors":"Pauline Frick, Panayiota Kendeou, Anne Schüler","doi":"10.3758/s13421-024-01586-9","DOIUrl":"https://doi.org/10.3758/s13421-024-01586-9","url":null,"abstract":"<p><p>Outdated information (i.e., information that is not, or is no longer, accurate) continues to be automatically activated during reading and can hinder learning processes. Thus, it is important to understand which factors influence the activation of outdated information and, therefore, knowledge revision processes. In three online experiments, we investigated how illustrating updated or outdated information via pictures influences the activation of outdated information. In Experiments 1 (N = 421) and 2 (N = 422), we varied whether participants read texts containing outdated information that was later updated (outdated text) or texts containing only updated information (consistent text). In addition, the updated information was or was not illustrated by a picture. In Experiment 3 (N = 441), participants read outdated texts, and we varied whether the outdated, the updated, or no information was illustrated. In all experiments, we measured reading times for a target sentence referring to the updated information and the sentence following the target sentence. Results showed that target sentences' reading times were faster for illustrated than for non-illustrated texts (Experiments 1 and 2). Moreover, reading times were longer when the outdated information was illustrated than when the updated information was illustrated (Experiment 3). These results suggest that pictures overall facilitate cognitive processes during reading, but their content matters: Pictures showing the updated information had a greater impact on reading times than pictures showing the outdated information. 
The results extend existing theories not only of knowledge revision but also of reading comprehension by demonstrating how pictures might influence cognitive processes during reading.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141237649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does sleep benefit source memory? Investigating 12-h retention intervals with a multinomial modeling approach.","authors":"Sabrina Berres, Edgar Erdfelder, Beatrice G Kuhlmann","doi":"10.3758/s13421-024-01579-8","DOIUrl":"https://doi.org/10.3758/s13421-024-01579-8","url":null,"abstract":"<p><p>For retention intervals of up to 12 h, the active systems consolidation hypothesis predicts that sleep compared to wakefulness strengthens the context binding of memories previously established during encoding. Sleep should thus improve source memory. By comparing retention intervals filled with natural night sleep versus daytime wakefulness, we tested this prediction in two online source-monitoring experiments using intentionally learned pictures as items and incidentally learned screen positions and frame colors as source dimensions. In Experiment 1, we examined source memory by varying the spatial position of pictures on the computer screen. Multinomial modeling analyses revealed a significant sleep benefit in source memory. In Experiment 2, we manipulated both the spatial position and the frame color of pictures orthogonally to investigate source memory for two different source dimensions at the same time, also allowing exploration of bound memory for both source dimensions. The sleep benefit on spatial source memory replicated. In contrast, no source memory sleep benefit was observed for either frame color or bound memory of both source dimensions, probably as a consequence of a floor effect in incidental encoding of color associations. In sum, the results of both experiments show that sleep within a 12-h retention interval improves source memory for spatial positions, supporting the prediction of the active systems consolidation hypothesis. 
However, additional research is required to clarify the impact of sleep on source memory for other context features and bound memories of multiple source dimensions.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141237450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multisensory processing impacts memory for objects and their sources.","authors":"Shea E Duarte, Andrew P Yonelinas, Simona Ghetti, Joy J Geng","doi":"10.3758/s13421-024-01592-x","DOIUrl":"https://doi.org/10.3758/s13421-024-01592-x","url":null,"abstract":"<p><p>Multisensory object processing improves recognition memory for individual objects, but its impact on memory for neighboring visual objects and scene context remains largely unknown. It is therefore unclear how multisensory processing impacts episodic memory for information outside of the object itself. We conducted three experiments to test the prediction that the presence of audiovisual objects at encoding would improve memory for nearby visual objects, and improve memory for the environmental context in which they occurred. In Experiments 1a and 1b, participants viewed audiovisual-visual object pairs or visual-visual object pairs with a control sound during encoding and were subsequently tested on their memory for each object individually. In Experiment 2, objects were paired with semantically congruent or meaningless control sounds and appeared within four different scene environments. Memory for the environment was tested. Results from Experiments 1a and 1b showed that encoding a congruent audiovisual object did not significantly benefit memory for neighboring visual objects, but Experiment 2 showed that encoding a congruent audiovisual object did improve memory for the environments in which those objects were encoded. These findings suggest that multisensory processing can influence memory beyond the objects themselves and that it has a unique role in episodic memory formation. 
This is particularly important for understanding how memories and associations are formed in real-world situations, in which objects and their surroundings are often multimodal.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141237726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using drawings and deep neural networks to characterize the building blocks of human visual similarity.","authors":"Kushin Mukherjee, Timothy T Rogers","doi":"10.3758/s13421-024-01580-1","DOIUrl":"https://doi.org/10.3758/s13421-024-01580-1","url":null,"abstract":"<p><p>Early in life and without special training, human beings discern resemblance between abstract visual stimuli, such as drawings, and the real-world objects they represent. We used this capacity for visual abstraction as a tool for evaluating deep neural networks (DNNs) as models of human visual perception. Contrasting five contemporary DNNs, we evaluated how well each explains human similarity judgments among line drawings of recognizable and novel objects. For object sketches, human judgments were dominated by semantic category information; DNN representations contributed little additional information. In contrast, such features explained significant unique variance in the perceived similarity of abstract drawings. In both cases, a vision transformer trained to blend representations of images and their natural language descriptions showed the greatest ability to explain human perceptual similarity - an observation consistent with contemporary views of semantic representation and processing in the human mind and brain. 
Together, the results suggest that the building blocks of visual similarity may arise within systems that learn to use visual information, not for specific classification, but in service of generating semantic representations of objects.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141176688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drawing from name in semantic dementia reveals graded object knowledge representations in anterior temporal lobe.","authors":"Tanmay Anand, Karalyn Patterson, James B Rowe, Thomas E Cope","doi":"10.3758/s13421-024-01578-9","DOIUrl":"https://doi.org/10.3758/s13421-024-01578-9","url":null,"abstract":"<p><p>Semantic dementia (SD) is characterized by progressive impairment in conceptual knowledge due to anterior temporal lobe (ATL) neurodegeneration. Extended neuropsychological assessments can quantitatively demonstrate the semantic impairment, but this graded loss of knowledge can also be readily observed in the qualitative observation of patients' recall of single concepts. Here, we present the results of a simple task of object drawing-from-name, by patients with SD (N = 19), who have isolated atrophy of the ATL bilaterally. Both cross-sectionally and longitudinally, patient drawings demonstrated a pattern of degradation in which rare and distinctive features (such as the hump on a camel) were lost earliest in disease course, and there was an increase in the intrusion of prototypical features (such as the typical small ears of most mammals on an elephant) with more advanced disease. Crucially, patient drawings showed a continuum of conceptual knowledge loss rather than a binary 'present' or 'absent' state. Overall, we demonstrate that qualitative evaluation of line drawings of animals and objects provides fascinating insights into the transmodal semantic deficit in SD. Our results are consistent with a distributed-plus-hub model of semantic memory. 
The graded nature of the deficit in semantic performance observed in our subset of longitudinally observed patients suggests that the temporal lobe binds feature-based semantic attributes in its central convergence zone.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Noisy speech impairs retention of previously heard information only at short time scales.","authors":"Violet A Brown, Katrina Sewell, Jed Villanueva, Julia F Strand","doi":"10.3758/s13421-024-01583-y","DOIUrl":"https://doi.org/10.3758/s13421-024-01583-y","url":null,"abstract":"<p><p>When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248 (1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248, (1968) reported - that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear - and results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. 
Thus, effortful listening caused by noise appears to only impair memory for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140960221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No evidence for cross-paradigm transfer of abstract task knowledge in adults and school-aged children.","authors":"Kaichi Yanaoka, Félice van 't Wout, Satoru Saito, Christopher Jarrold","doi":"10.3758/s13421-024-01581-0","DOIUrl":"https://doi.org/10.3758/s13421-024-01581-0","url":null,"abstract":"<p><p>Cognitive control is a hallmark of human cognition. A large number of studies have focused on the plasticity of cognitive control and examined how repeated task experience leads to the improvement of cognitive control in novel task environments. However, it has been demonstrated that training-induced changes are very selective and that transfer occurs within one task paradigm but not across different task paradigms. The current study tested the possibility that cross-paradigm transfer would occur if a common cognitive control strategy is employed across different task paradigms. Specifically, we examined whether prior experience of using reactive control in one task paradigm (i.e., either the cued task-switching paradigm or the AX-CPT) makes adults (N = 137) and 9- to 10-year-olds (N = 126) respond in a reactive way in a subsequent condition of another task paradigm in which proactive control could have been engaged. Bayesian generalized mixed-effects models revealed clear evidence of an absence of cross-paradigm transfer of reactive control in both adults and school-aged children. 
Based on these findings, we discuss to what extent learned control could be transferred across different task contexts and the task-specificity of proactive/reactive control strategies.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140945130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proactive interference of visual working memory chunks implicates long-term memory.","authors":"Logan Doyle, Susanne Ferber, Katherine D Duncan","doi":"10.3758/s13421-024-01585-w","DOIUrl":"https://doi.org/10.3758/s13421-024-01585-w","url":null,"abstract":"<p><p>Visual working memory (VWM) is a limited cognitive resource that can be functionally expanded through chunking (Miller, 1956). For example, participants can hold an increasing number of colours in mind as they learn to chunk reliably paired combinations (Brady et al., 2009). We investigated whether this benefit is mediated through the in situ compression of VWM representations (Brady et al., 2009) or the offloading of chunks to long-term memory (LTM; Huang & Awh, 2018; Ngiam et al., 2019) by asking if a vulnerability of LTM - proactive interference - influences VWM performance. We adapted previous designs using deterministic (Experiment 1, N = 60) and probabilistic pairings (Experiments 2 and 3, N = 64 and 80, respectively), to include colour pairings that swapped in sequence along with pairings that were consistent in sequence. Generally, participants reported colours from consistent pairs more accurately than from swapping pairs, which we designed to drive interference in LTM (Experiments 1 and 2). The error profiles also pointed to proactive interference between swapping pairs in all three experiments. Moreover, participants who had explicit awareness of frequent colour pairings had higher VWM accuracy, and their errors reflected more proactive interference than their unaware counterparts (Experiment 3). 
This pattern of long-term proactive interference in a VWM task lends support for accounts of VWM chunking that propose LTM offloading.</p>","PeriodicalId":48398,"journal":{"name":"Memory & Cognition","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140960177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}