Cognition, Pub Date: 2024-09-12, DOI: 10.1016/j.cognition.2024.105933
Yan Chen, Adam Tierney, Peter Q. Pfordresher
"Speech-to-song transformation in perception and production"
Abstract: The speech-to-song transformation is an illusion in which certain spoken phrases are perceived as more song-like after being repeated several times. The present study addresses whether this perceptual transformation leads to a corresponding change in how accurately participants imitate pitch/time patterns in speech. We used illusion-inducing (illusion stimuli) and non-inducing (control stimuli) spoken phrases. In each trial, one stimulus was presented eight times in succession. Participants were asked to reproduce the phrase and rate how music-like it sounded after the first and final (eighth) repetitions. Ratings of illusion stimuli reflected more song-like perception after the final repetition than after the first, whereas ratings of control stimuli did not change over repetitions. The results from imitative production mirrored the perceptual effects: pitch matching of illusion stimuli improved from the first to the final repetition, but pitch matching of control stimuli did not. These findings indicate a consistent pattern of speech-to-song transformation in both perception and production, suggesting that the distinction between music and language may be more malleable than originally thought.
(Cognition, Volume 254, Article 105933)
Cognition, Pub Date: 2024-09-10, DOI: 10.1016/j.cognition.2024.105941
Kevin D. Wilson, May Lonergan, Claire Nagel, Brian P. Meier
"Does reductive information increase satisfaction with scientific explanations? Three preregistered tests of the reductive allure effect"
Abstract: Understanding information-processing biases is critical for improving scientific literacy. Research suggests that people rate scientific explanations containing reductive jargon (e.g., irrelevant chemistry jargon in the explanation of a biological phenomenon) as better than those without it, a phenomenon known as the reductive allure (RA) effect. Here, however, in three preregistered online experiments, we were unable to replicate this effect using similar (and in some cases identical) materials and procedures to the original demonstration of the phenomenon. Our results suggest that text-based RA effects may not be as strong as previously thought and are possibly changing over time.
(Cognition, Volume 254, Article 105941)
Cognition, Pub Date: 2024-09-10, DOI: 10.1016/j.cognition.2024.105935
Calum Hartley, Lucy Colbourne, Naziya Lokat, Rachel Kelly, John J. Shaw
"Investigating children's valuation of authentic and inauthentic objects: Visible object properties vs. invisible ownership history"
Abstract: In human culture, an object's value is influenced by tangible properties (e.g., visual desirability and constituent materials) and intangible ownership history (e.g., authentic objects owned by celebrities are often worth more than similar inauthentic objects). Children are sensitive to both of these factors as independent determinants of value, but research has yet to elucidate how they interact. Here, we investigate whether children aged 5–11 years consider object properties or authentic ownership history to be the greater determinant of value, and we examine how their valuations are influenced by owners' characteristics. In Study 1, visually desirable and undesirable items belonging to ‘famously good’ owners received higher valuations than similar items belonging to non-famous owners, whereas desirable items belonging to ‘famously bad’ owners received significantly lower valuations. In Study 2, children considered items made from cheap materials belonging to famously good owners, but not famously bad owners, to be as valuable as similar items made from expensive materials belonging to non-famous owners. In Study 3, physical contact with a famously bad owner had a detrimental impact on value, but worn and unworn objects belonging to famously good owners did not differ significantly in value. Across studies, children's sensitivity to authentic ownership history and physical contact as determinants of value increased with age. Together, these findings demonstrate that children's valuation of ownership history relative to object properties depends on the owner's ‘essence’, and their sensitivity to owner contact as a mediator of value indicates awareness of ‘magical contagion’.
(Cognition, Volume 254, Article 105935; open access)
Cognition, Pub Date: 2024-09-03, DOI: 10.1016/j.cognition.2024.105938
Cheongil Kim, Sang Chul Chong
"Metacognition of perceptual resolution across and around the visual field"
Abstract: Do people have accurate metacognition of non-uniformities in perceptual resolution across (i.e., eccentricity) and around (i.e., polar angle) the visual field? Despite its theoretical and practical importance, this question has not yet been empirically tested. This study investigated metacognition of perceptual resolution via guessing patterns during a degradation (i.e., loss of high spatial frequencies) localization task. Participants localized a degraded face among nine faces that appeared simultaneously throughout the visual field: at the fovea (fixation at the center of the screen), the parafovea (left, right, above, and below fixation at 4° eccentricity), and the periphery (left, right, above, and below fixation at 10° eccentricity). We reasoned that if participants had accurate metacognition, then in the absence of a degraded face they would exhibit compensatory guessing patterns based on counterfactual reasoning ("The degraded face must have been presented at locations with lower perceptual resolution, because if it had been presented at locations with higher perceptual resolution, I would have easily detected it."), that is, more guess responses for locations with lower perceptual resolution. In two experiments, the observed guessing patterns suggest that people can monitor non-uniformities in perceptual resolution across, but not around, the visual field during tasks, indicating partial in-the-moment metacognition. Additionally, we found that global explicit knowledge of perceptual resolution is not sufficient to guide in-the-moment metacognition during tasks, which suggests a dissociation between local and global metacognition.
(Cognition, Volume 253, Article 105938)
Cognition, Pub Date: 2024-09-02, DOI: 10.1016/j.cognition.2024.105940
Haoyang Yu, Irene Sperandio, Lihong Chen
"Simple actions modulate context-dependent visual size perception at late processing stages"
Abstract: A simple button press towards a prime stimulus enhances subsequent visual search for objects that match the prime. The present study investigated whether this action effect is a general phenomenon across task domains, and what its underlying neural mechanisms are. The action effect was measured in an unspeeded size-matching task, with the central target and the surrounding inducers of the Ebbinghaus illusion presented together to one eye or separately to each eye, and with repetitive TMS applied over the right primary motor cortex (M1). The results showed that a prior key press significantly reduced the illusion effect compared to passive viewing. Notably, the action effect persisted with dichoptic presentation of the Ebbinghaus configuration but disappeared with right M1 disruption. These results suggest that action guides visual perception to influence human behavior, acting mainly at late visual processing stages and probably relying on feedback projections from the motor cortex.
(Cognition, Volume 253, Article 105940; open access)
Cognition, Pub Date: 2024-08-31, DOI: 10.1016/j.cognition.2024.105934
Andreea Zaman, Roni Setton, Caroline Catmur, Charlotte Russell
"What is autonoetic consciousness? Examining what underlies subjective experience in memory and future thinking"
Abstract: Autonoetic consciousness is the awareness that an event we remember is one that we ourselves experienced. It is a defining feature of our subjective experience of remembering and imagining future events. Given its subjective nature, there is ongoing debate about how to measure it. Our goal was to develop a framework to identify cognitive markers of autonoetic consciousness. Across two studies (N = 342) we asked young, healthy participants to provide written descriptions of two autobiographical memories, two plausible future events, and an experimentally encoded video. Participants then rated their subjective experience during remembering and imagining. Exploratory Factor Analysis of these data uncovered the latent variables underlying autonoetic consciousness across these different events. In contrast to work that emphasizes the distinction between Remember and Know as key to autonoetic consciousness, Re-experiencing (and Pre-experiencing for future events) was consistently identified as a core marker of autonoetic consciousness. Mental Time Travel also emerged as a marker for all types of memory events, but not for imagining the future. In addition, our factor analysis allowed us to demonstrate directly, for the first time, the features of mental imagery associated with the sense of autonoetic consciousness in autobiographical memory: vivid, visual imagery from a first-person perspective. Finally, with regression analysis, the emergent factor structure of autonoetic consciousness predicted the richness of autobiographical memory texts, but not of episodic recall of the encoded video. This work provides a novel way to assess autonoetic consciousness, illustrates how autonoetic consciousness manifests differently in memory and imagination, and defines the mental representations intrinsic to this process.
(Cognition, Volume 253, Article 105934)
Cognition, Pub Date: 2024-08-31, DOI: 10.1016/j.cognition.2024.105930
Amber M. Giacona, Brynn N. Schuetter, Lana E. Dranow, Christopher S. Peters, James Michael Lampinen
"Thinking outside the red box: Does the simultaneous Showup distinguish between filler siphoning and diagnostic feature detection accounts of lineup/Showup differences?"
Abstract: Lineups are considered a superior method of identification to showups, but the reason why remains contested. There are two main theories: diagnostic feature detection theory, which holds that surrounding the suspect with fillers causes the eyewitness to focus on the features that are most diagnostic, and differential filler siphoning theory, which claims that the fillers draw incorrect choices away from the suspect. Colloff and Wixted (2020) created a novel identification task, called a simultaneous showup, designed to prevent filler siphoning while still allowing comparison between members of the array. However, even in the simultaneous showup, covert filler siphoning may occur. In Experiment 1, we replicated the simultaneous showup condition and also asked participants whether the other photos affected their decision making; we found evidence that participants self-reported both diagnostic feature detection and covert filler siphoning. In Experiment 2, we replicated the main findings of Colloff and Wixted (2020, Experiment 3), and participants again self-reported both diagnostic feature detection and covert filler siphoning. This led us to conclude that the simultaneous showup procedure cannot fully exclude covert filler siphoning.
(Cognition, Volume 253, Article 105930)
Cognition, Pub Date: 2024-08-31, DOI: 10.1016/j.cognition.2024.105936
Kosuke Motoki, Charles Spence, Carlos Velasco
"Colour/shape-taste correspondences across three languages in ChatGPT"
Abstract: Crossmodal correspondences, the tendency for a sensory feature or attribute in one sensory modality (whether physically present or merely imagined) to be associated with a sensory feature in another modality, have been studied extensively, revealing consistent patterns such as sweet tastes being associated with pink colours and round shapes across languages. The present research explores whether such correspondences are captured by ChatGPT, a large language model developed by OpenAI. Across twelve studies, this research investigates colour/shape-taste crossmodal correspondences in ChatGPT-3.5 and -4o, focusing on associations between shapes/colours and the five basic tastes across three languages (English, Japanese, and Spanish). Studies 1A-F examined taste-shape associations, using prompts in the three languages to assess ChatGPT's association of round and angular shapes with the five basic tastes. The results indicated significant, consistent associations between shape and taste, with, for example, round shapes strongly associated with sweet/umami tastes and angular shapes with bitter/salty/sour tastes. The magnitude of shape-taste matching appears to be greater in ChatGPT-4o than in ChatGPT-3.5, and greater when ChatGPT is prompted in English or Spanish than in Japanese. Studies 2A-F focused on colour-taste correspondences, using ChatGPT to assess associations between eleven colours and the five basic tastes. The results indicated that ChatGPT-4o, but not ChatGPT-3.5, generally replicates the patterns of colour-taste correspondences previously observed in human participants. Specifically, ChatGPT-4o associates sweet tastes with pink, sour with yellow, salty with white/blue, bitter with black, and umami with red across languages. However, the shape/colour-taste matching observed in ChatGPT-4o appears more pronounced than in humans (i.e., showing little variance and large mean differences), and so does not adequately reflect the subtle nuances typically seen in human shape/colour-taste correspondences. These findings suggest that ChatGPT captures colour/shape-taste correspondences, with language- and GPT-version-specific variations, albeit with some differences when compared to previous studies involving human participants. These findings contribute valuable knowledge to the field of crossmodal correspondences, explore the possibility of generative AI that resembles human perceptual systems and cognition across languages, and provide insight into the development and evolution of generative AI systems that capture human crossmodal correspondences.
(Cognition, Volume 253, Article 105936; open access)
Cognition, Pub Date: 2024-08-31, DOI: 10.1016/j.cognition.2024.105932
Lucie Wolters, Ori Lavi-Rotbain, Inbal Arnon
"Zipfian distributions facilitate children's learning of novel word-referent mappings"
Abstract: The word-frequency distributions children hear during language learning are highly skewed (Zipfian). Previous studies suggest that such skewed environments confer a learnability advantage in tasks that require the learner to discover the units to be learned, as in word segmentation or cross-situational learning. This facilitative effect has been attributed to contextual facilitation from high-frequency items in learning lower-frequency items, and to better learning under the increased predictability (lower entropy) of skewed distributions. Here, we ask whether Zipfian distributions facilitate learning beyond the discovery of units, as expected under the predictability account. We tested children's learning of novel word-referent mappings in a task where each mapping was presented in isolation during training and did not need to be discovered. We compared learning in a uniform environment to learning in two skewed environments with different entropy levels. Children's learning was overall better in the two skewed environments, even for low-frequency items. These results extend the facilitative effect of Zipfian distributions to additional learning tasks and show that they can facilitate language learning beyond the discovery of units.
(Cognition, Volume 253, Article 105932)
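The "lower entropy" point in the abstract above can be made concrete with a small sketch (illustrative only, not the authors' materials; the number of word types is a hypothetical choice): the Shannon entropy of a Zipfian frequency distribution over a fixed set of word types is lower than that of a uniform distribution over the same types, so each exposure in a skewed environment is more predictable.

```python
import math

def shannon_entropy(freqs):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(freqs)
    probs = [f / total for f in freqs]
    return -sum(p * math.log2(p) for p in probs if p > 0)

n_types = 8  # hypothetical number of word-referent mappings

# Uniform environment: every mapping occurs equally often.
uniform = [1] * n_types

# Zipfian (skewed) environment: frequency of rank r is proportional to 1/r.
zipfian = [1 / r for r in range(1, n_types + 1)]

h_uniform = shannon_entropy(uniform)  # log2(8) = 3.0 bits
h_zipfian = shannon_entropy(zipfian)  # lower: the skew makes exposures more predictable

print(f"uniform: {h_uniform:.2f} bits, zipfian: {h_zipfian:.2f} bits")
assert h_zipfian < h_uniform
```

Comparing environments with the same item inventory but different entropy, as the study does, holds the units constant while varying only this predictability.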
Cognition, Pub Date: 2024-08-30, DOI: 10.1016/j.cognition.2024.105874
Maddie Cusimano, Luke B. Hewitt, Josh H. McDermott
"Listening with generative models"
Abstract: Perception has long been envisioned as using an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and it exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus computability and interpretable model structure enabled ‘rich falsification’, revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and they illustrate the opportunities and challenges involved in incorporating generative models into theories of perception.
(Cognition, Volume 253, Article 105874; open access)