{"title":"Printing words in alternating colors facilitates eye movements among young and older Chinese adults.","authors":"Jinger Pan, Aiping Wang, Mingsha Zhang, Yiu-Kei Tsang, Ming Yan","doi":"10.3758/s13423-024-02581-6","DOIUrl":"https://doi.org/10.3758/s13423-024-02581-6","url":null,"abstract":"<p><p>It is well known that the Chinese writing system lacks visual cues for word boundaries, such as interword spaces. However, characters must be grouped into words or phrases for understanding, and the lack of interword spaces can cause certain ambiguity. In the current study, young and older Chinese adults' eye movements were recorded during their reading of naturally unspaced sentences, where consecutive words or nonwords were printed using alternating colors. The eye movements of both the Chinese young and older adults were clearly influenced by this explicit word boundary information. Across a number of eye-movement measures, in addition to a general age-related slowdown, the results showed that both groups benefited overall from the explicit color-based word boundary and experienced interference from the nonword boundary. Moreover, the manipulations showed stronger effects among the older adults. We discuss implications for practical application.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142392805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Tweedledum and Tweedledee of dynamic decisions: Discriminating between diffusion decision and accumulator models.","authors":"Peter D Kvam","doi":"10.3758/s13423-024-02587-0","DOIUrl":"https://doi.org/10.3758/s13423-024-02587-0","url":null,"abstract":"<p><p>Theories of dynamic decision-making are typically built on evidence accumulation, which is modeled using racing accumulators or diffusion models that track a shifting balance of support over time. However, these two types of models are only two special cases of a more general evidence accumulation process where options correspond to directions in an accumulation space. Using this generalized evidence accumulation approach as a starting point, I identify four ways to discriminate between absolute-evidence and relative-evidence models. First, an experimenter can look at the information that decision-makers considered to identify whether there is a filtering of near-zero evidence samples, which is characteristic of a relative-evidence decision rule (e.g., diffusion decision model). Second, an experimenter can disentangle different components of drift rates by manipulating the discriminability of the two response options relative to the stimulus to delineate the balance of evidence from the total amount of evidence. Third, a modeler can use machine learning to classify a set of data according to its generative model. Finally, machine learning can also be used to directly estimate the geometric relationships between choice options. I illustrate these different approaches by applying them to data from an orientation-discrimination task, showing converging conclusions across all four methods in favor of accumulator-based representations of evidence during choice. These tools can clearly delineate absolute-evidence and relative-evidence models, and should be useful for comparing many other types of decision theories.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information entropy facilitates (not impedes) lexical processing during language comprehension.","authors":"Hossein Karimi, Pete Weber, Jaden Zinn","doi":"10.3758/s13423-024-02463-x","DOIUrl":"10.3758/s13423-024-02463-x","url":null,"abstract":"<p><p>It is well known that contextual predictability facilitates word identification, but it is less clear whether the uncertainty associated with the current context (i.e., its lexical entropy) influences sentence processing. On the one hand, high entropy contexts may lead to interference due to greater number of lexical competitors. On the other hand, predicting multiple lexical competitors may facilitate processing through the preactivation of shared semantic features. In this study, we examined whether entropy measured at the trial level (i.e., for each participant, for each item) corresponds to facilitatory or inhibitory effects. Trial-level entropy captures each individual's knowledge about specific contexts and is therefore a more valid and sensitive measure of entropy (relative to the commonly employed item-level entropy). Participants (N = 112) completed two experimental sessions (with counterbalanced orders) that were separated by a 3- to 14-day interval. In one session, they produced up to 10 completions for sentence fragments (N = 647). In another session, they read the same sentences including a target word (whose entropy value was calculated based on the produced completions) while reading times were measured. We observed a facilitatory (not inhibitory) effect of trial-level entropy on lexical processing over and above item-level measures of lexical predictability (including cloze probability, surprisal, and semantic constraint). Extra analyses revealed that greater semantic overlap between the target and the produced responses facilitated target processing. Thus, the results lend support to theories of lexical prediction maintaining that prediction involves broad activation of semantic features rather than activation of full lexical forms.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472653/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139741836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distinct but related abilities for visual and haptic object recognition.","authors":"Jason K Chow, Thomas J Palmeri, Isabel Gauthier","doi":"10.3758/s13423-024-02471-x","DOIUrl":"10.3758/s13423-024-02471-x","url":null,"abstract":"<p><p>People vary in their ability to recognize objects visually. Individual differences for matching and recognizing objects visually is supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, which have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities; while there are mechanisms that may generalize across categories, tasks, and modalities, there are still other mechanisms that are distinct between modalities.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What, if anything, can be considered an amodal sensory dimension?","authors":"Charles Spence, Nicola Di Stefano","doi":"10.3758/s13423-023-02447-3","DOIUrl":"10.3758/s13423-023-02447-3","url":null,"abstract":"<p><p>The term 'amodal' is a key topic in several different research fields across experimental psychology and cognitive neuroscience, including in the areas of developmental and perception science. However, despite being regularly used in the literature, the term means something different to the researchers working in the different contexts. Many developmental scientists conceive of the term as referring to those perceptual qualities, such as, for example, the size and shape of an object, that can be picked up by multiple senses (e.g., vision and touch potentially providing information relevant to the same physical stimulus/property). However, the amodal label is also widely used in the case of those qualities that are not directly sensory, such as, for example, numerosity, rhythm, synchrony, etc. Cognitive neuroscientists, by contrast, tend to use the term amodal to refer to those central cognitive processes and brain areas that do not appear to be preferentially responsive to a particular sensory modality or to those symbolic or formal representations that essentially lack any modality and that are assumed to play a role in the higher processing of sensory information. Finally, perception scientists sometimes refer to the phenomenon of 'amodal completion', referring to the spontaneous completion of perceptual information that is missing when occluded objects are presented to observers. In this paper, we review the various different ways in which the term 'amodal' has been used in the literature and the evidence supporting the various uses of the term. Morever, we highlight some of the various properties that have been suggested to be 'amodal' over the years. Then, we try to address some of the questions that arise from the reviewed evidence, such as: Do different uses of the 'term' refer to different domains, for example, sensory information, perceptual processes, or perceptual representations? Are there any commonalities among the different uses of the term? To what extent is research on cross-modal associations (or correspondences) related to, or can shed light on, amodality? And how is the notion of amodal related to multisensory integration? Based on the reviewed evidence, it is argued that there is, as yet, no convincing empirical evidence to support the claim that amodal sensory qualities exist. We thus suggest that use of the term amodal would be more meaningful with respect to abstract cognition rather than necessarily sensory perception, the latter being more adequately explained/understood in terms of highly redundant cross-modal correspondences.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543734/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Separated hands further response-response binding effects.","authors":"Silvia Selimi, Christian Frings, Birte Moeller","doi":"10.3758/s13423-023-02419-7","DOIUrl":"10.3758/s13423-023-02419-7","url":null,"abstract":"<p><p>Action control is hierarchically organized. Multiple consecutive responses can be integrated into an event representation of higher order and can retrieve each other upon repetition, resulting in so-called response-response binding effects. Previous research indicates that the spatial separation of responses can affect how easily they can be cognitively separated. In this study, we introduced a barrier between the responding hands to investigate whether the spatial separation of two responses also influences response-response binding effects. In line with previous research on stimulus-response binding, we expected an increased separability of responses to result in stronger response-response binding effects when responding hands were separated by a barrier. We indeed found stronger response-response binding effects with separated hands. Results indicate that a more distinct representation of individual actions through increased separability might benefit the control of hierarchical actions.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140022450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face shape and motion are perceptually separable: Support for a revised model of face processing.","authors":"Emily Renae Martin, Jason S Hays, Fabian A Soto","doi":"10.3758/s13423-024-02470-y","DOIUrl":"10.3758/s13423-024-02470-y","url":null,"abstract":"<p><p>A recent model of face processing proposes that face shape and motion are processed in parallel brain pathways. Although tested in neuroimaging, the assumptions of this theory remain relatively untested through controlled psychophysical studies until now. Recruiting undergraduate students over the age of 18, we test this hypothesis using a tight control of stimulus factors, through computerized three-dimensional face models and calibration of dimensional discriminability, and of decisional factors, through a model-based analysis using general recognition theory (GRT). Theoretical links between neural and perceptual forms of independence within GRT allowed us to derive the a priori hypotheses that perceptual separability of shape and motion should hold, while other forms of independence defined within GRT might fail. We found evidence to support both of those predictions.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brief category learning distorts perceptual space for complex scenes.","authors":"Gaeun Son, Dirk B Walther, Michael L Mack","doi":"10.3758/s13423-024-02484-6","DOIUrl":"10.3758/s13423-024-02484-6","url":null,"abstract":"<p><p>The formation of categories is known to distort perceptual space: representations are pushed away from category boundaries and pulled toward categorical prototypes. This phenomenon has been studied with artificially constructed objects, whose feature dimensions are easily defined and manipulated. How such category-induced perceptual distortions arise for complex, real-world scenes, however, remains largely unknown due to the technical challenge of measuring and controlling scene features. We address this question by generating realistic scene images from a high-dimensional continuous space using generative adversarial networks and using the images as stimuli in a novel learning task. Participants learned to categorize the scene images along arbitrary category boundaries and later reconstructed the same scenes from memory. Systematic biases in reconstruction errors closely tracked each participant's subjective category boundaries. These findings suggest that the perception of global scene properties is warped to align with a newly learned category structure after only a brief learning experience.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140028749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency-tagging EEG reveals the effect of attentional focus on abstract magnitude processing.","authors":"Cathy Marlair, Aliette Lochy, Virginie Crollen","doi":"10.3758/s13423-024-02480-w","DOIUrl":"10.3758/s13423-024-02480-w","url":null,"abstract":"<p><p>While humans can readily access the common magnitude of various codes such as digits, number words, or dot sets, it remains unclear whether this process occurs automatically, or only when explicitly attending to magnitude information. We addressed this question by examining the neural distance effect, a robust marker of magnitude processing, with a frequency-tagging approach. Electrophysiological responses were recorded while participants viewed rapid sequences of a base numerosity presented at 6 Hz (e.g., \"2\") in randomly mixed codes: digits, number words, canonical dot, and finger configurations. A deviant numerosity either close (e.g., \"3\") or distant (e.g., \"8\") from the base was inserted every five items. Participants were instructed to focus their attention either on the magnitude number feature (from a previous study), the parity number feature, a nonnumerical color feature or no specific feature. In the four attentional conditions, we found clear discrimination responses of the deviant numerosity despite its code variation. Critically, the distance effect (larger responses when base/deviant are distant than close) was present when participants were explicitly attending to magnitude and parity, but it faded with color and simple viewing instructions. Taken together, these results suggest automatic access to an abstract number representation but highlight the role of selective attention in processing the underlying magnitude information. This study therefore provides insights into how attention can modulate the neural activity supporting abstract magnitude processing.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140102337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The influence of depth on object selection and manipulation in visual working memory within a 3D context.","authors":"Jiehui Qian, Bingxue Fu, Ziqi Gao, Bowen Tan","doi":"10.3758/s13423-024-02492-6","DOIUrl":"10.3758/s13423-024-02492-6","url":null,"abstract":"<p><p>Recent studies have examined whether the internal selection mechanism functions similarly for perception and visual working memory (VWM). However, the process of how we access and manipulate object representations distributed in a 3D space remains unclear. In this study, we utilized a memory search task to investigate the effect of depth on object selection and manipulation within VWM. The memory display consisted of colored items half positioned at the near depth plane and the other half at the far plane. During memory maintenance, the participants were instructed to search for a target representation and update its color. The results showed that under object-based attention (Experiments 1, 3, and 5), the update time was faster for targets at the near plane than for those at the far plane. This effect was absent in VWM when deploying spatial attention (Experiment 2) and in visual search regardless of the type of attention deployed (Experiment 4). The differential effects of depth on spatial and object-based attention in VWM suggest that spatial attention primarily relied on 2D location information irrespective of depth, whereas object-based attention seemed to prioritize memory representations at the front plane before shifting to the back. Our findings shed light on the interaction between depth perception and the selection mechanisms within VWM in a 3D context, emphasizing the importance of ordinal, rather than metric, spatial information in guiding object-based attention in VWM.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140194436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}