VISUAL COGNITION | Pub Date: 2023-10-16 | DOI: 10.1080/13506285.2023.2250505
Andrew Wildman, Richard Ramsey
{"title":"Investigating the automaticity of links between body perception and trait concepts","authors":"Andrew Wildman, Richard Ramsey","doi":"10.1080/13506285.2023.2250505","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250505","url":null,"abstract":"ABSTRACTSocial cognition has been argued to rely on automatic mechanisms, but little is known about how automatically the processing of body shapes is linked to other social processes, such as trait inference. In three pre-registered experiments, we tested the automaticity of links between body shape perception and trait inference by manipulating cognitive load during a response-competition task. In Experiment 1 (N = 52), participants categorised body shapes in the context of compatible or incompatible trait words, under high and low cognitive load. Bayesian multi-level modelling of reaction times indicated that interference caused by the compatibility of trait cues was insensitive to concurrent demands placed on working memory resources. These findings indicate that the linking of body shapes and traits is resource-light and more “automatic” in this sense. In Experiment 2 (N = 39) and 3 (N = 70), we asked participants to categorise trait words in the context of task-irrelevant body shapes. Under these conditions, no evidence of interference was found, regardless of concurrent load. These results suggest that while body shapes and trait concepts can be linked in an automatic manner, such processes are sensitive to wider contextual factors, such as the order in which information is presented.KEYWORDS: Social cognitionbody perceptionautomaticitytrait inferencecognitive load Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136113723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-10-02 | DOI: 10.1080/13506285.2023.2263204
Timothy L. Hubbard, Susan E. Ruppel
{"title":"Are attentional momentum and representational momentum related?","authors":"Timothy L. Hubbard, Susan E. Ruppel","doi":"10.1080/13506285.2023.2263204","DOIUrl":"https://doi.org/10.1080/13506285.2023.2263204","url":null,"abstract":"ABSTRACTIn attentional momentum, detection of a target further ahead in the direction of an ongoing attention shift is faster than detection of a target an equal distance in an orthogonal direction. In representational momentum, memory for the location of a previously viewed target is displaced in the direction of target motion. Hubbard [Hubbard, T. L. (2014). Forms of momentum across space: Representational, operational, and attentional. Psychonomic Bulletin & Review, 21(6), 1371–1403; Hubbard, T. L. (2015). The varieties of momentum-like experience. Psychological Bulletin, 141(6), 1081–1119] hypothesized that attentional momentum and representational momentum might be related or reflect the same mechanism or similar mechanisms. Two experiments collected measures of attentional momentum and representational momentum. In Experiment 1, attentional momentum based on differences between detecting targets opposite or orthogonal to a cued location was not correlated with representational momentum based on M displacement for the final location of a target. In Experiment 2, attentional momentum based on facilitation in detecting a gap on a probe presented in front of the final target location was not correlated with representational momentum based on a weighted mean of the probabilities of a same response in probe judgments of the final target location. Implications of the findings for the relationship of attentional momentum and representational momentum, and for theories of momentum-like effects in general, are considered.KEYWORDS: Attentional momentumrepresentational momentumdisplacementspatial representation AcknowledgementThe authors thank two anonymous reviewers for helpful comments on a previous version of the manuscript.Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 Durations of the different stages of a trial differed slightly from those in Pratt et al. (Citation1999) to ensure that timing in the attentional momentum task was consistent with timing in the representational momentum task.2 Hubbard (Citation2019) suggested that an understanding of momentum-like processes needed to consider all of Marr’s (Citation1982) levels of analysis. Accordingly, although attentional momentum and representational momentum appear similar at the level of computational theory (i.e., both facilitate processing of spatial information expected to be present in the near future and both involve displacement across space, Hubbard, Citation2014, Citation2015), the current data suggest attentional momentum and representational momentum could be different at the level of representation and algorithm or the level of implementation (i.e., involve different mechanisms).","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135895634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-09-05 | DOI: 10.1080/13506285.2023.2250506
R. Yu, Jiaying Zhao
{"title":"Serial and joint processing of conjunctive predictions","authors":"R. Yu, Jiaying Zhao","doi":"10.1080/13506285.2023.2250506","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250506","url":null,"abstract":"ABSTRACT When two jointly presented cues predict different outcomes, people respond faster to the conjunction/overlap of outcomes. Two explanations exist. In the joint account, people prioritize conjunction. In the serial account, people process cues serially and incidentally respond faster to conjunction. We tested these accounts in three experiments using novel web based attention-tracking tools. Participants learned colour-location associations where colorus predicted target locations (Experiment 1). Afterward, two cues appeared jointly and targets followed randomly. Exploratory data showed participants initially prioritized locations consistent with the conjunction, shifting later. Experiment 2 presented complex color-category associations during exposure. Upon seeing joint cues, participants' responses indicated both serial and joint processing. Experiment 3, with imperfect cue-outcome associations during exposure, surprisingly showed robust conjunctive predictions, likely because people expected exceptions to their predictions. Overall, strong learning led to spontaneous conjunctive predictions, but there were quick shifts to alternatives like serial processing when people were not expecting exceptions.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"1 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44417670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-08-30 | DOI: 10.1080/13506285.2023.2250530
C. R. McCormick, R. S. Redden, R. Klein
{"title":"How does exogenous alerting impact endogenous preparation on a temporal cueing task","authors":"C. R. McCormick, R. S. Redden, R. Klein","doi":"10.1080/13506285.2023.2250530","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250530","url":null,"abstract":"","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44806893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-08-30 | DOI: 10.1080/13506285.2023.2250507
R. Hessels, Martin K. Teunisse, D. Niehorster, M. Nyström, J. Benjamins, Atsushi Senju, Ignace T. C. Hooge
{"title":"Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory?","authors":"R. Hessels, Martin K. Teunisse, D. Niehorster, M. Nyström, J. Benjamins, Atsushi Senju, Ignace T. C. Hooge","doi":"10.1080/13506285.2023.2250507","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250507","url":null,"abstract":"ABSTRACT Visual routines theory posits that vision is critical for guiding sequential actions in the world. Most studies on the link between vision and sequential action have considered individual agents, while substantial human behaviour is characterized by multi-party interaction. Here, the actions of each person may affect what the other can subsequently do. We investigated task execution and gaze allocation of 19 dyads completing a Duplo-model copying task together, while wearing the Pupil Invisible eye tracker. We varied whether all blocks were visible to both participants, and whether verbal communication was allowed. For models in which not all blocks were visible, participants seemed to coordinate their gaze: The distance between the participants' gaze positions was smaller and dyads looked longer at the model concurrently than for models in which all blocks were visible. This was most pronounced when verbal communication was allowed. We conclude that the way the collaborative task was executed depended both on whether visual information was available to both persons, and how communication took place. Modelling task structure and gaze allocation for human-human and human-robot collaboration thus requires more than the observable behaviour of either individual. We discuss whether an interactive visual routines theory ought to be pursued.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46335916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-04-21 | DOI: 10.1080/13506285.2023.2250514
K. Ritchie, C. Cartledge, R. Kramer
{"title":"Investigating the other race effect: Human and computer face matching and similarity judgements","authors":"K. Ritchie, C. Cartledge, R. Kramer","doi":"10.1080/13506285.2023.2250514","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250514","url":null,"abstract":"ABSTRACT The other race effect (ORE) in part describes how people are poorer at identifying faces of other races compared to own-race faces. While well-established with face memory, more recent studies have begun to demonstrate its presence in face matching tasks, with minimal memory requirements. However, several of these studies failed to compare both races of faces and participants in order to fully test the predictions of the ORE. Here, we utilized images of both Black and White individuals, and Black and White participants, as well as tasks measuring perceptions of face matching and similarity. In addition, human judgements were directly compared with computer algorithms. First, we found only partial support for an ORE in face matching. Second, a deep convolutional neural network (residual network with 29 layers) performed exceptionally well with both races. The DCNN’s representations were strongly associated with human perceptions. Taken together, we found that the ORE was not robust or compelling in our human data, and was absent in the computer algorithms we tested. We discuss our results in the context of ORE literature, and the importance of state-of-the-art algorithms.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"21 1","pages":"314 - 325"},"PeriodicalIF":2.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59828009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-03-16 | DOI: 10.1080/13506285.2023.2224120
Ruth Laub, A. Münchau, C. Beste, C. Frings
{"title":"Too much information … The influence of target selection difficulty on binding processes","authors":"Ruth Laub, A. Münchau, C. Beste, C. Frings","doi":"10.1080/13506285.2023.2224120","DOIUrl":"https://doi.org/10.1080/13506285.2023.2224120","url":null,"abstract":"ABSTRACT The binding of stimuli and responses is an important mechanism in action control. Features of stimuli and responses are integrated into event files. A re-encounter with one or more of the stored features leads to automatic retrieval of the previous event file including the previously integrated response. The distractor-response binding effect evidenced that even irrelevant stimuli can be integrated with a response, subsequently trigger retrieval and thereby have an impact on behaviour. However, the type of distractor stimuli, the method of distractor presentation, and the display configuration largely differed in previous studies with regard to the target selection difficulty. In the present study, we thus varied the extent of target selection difficulty to investigate its role on the distractor-response binding effect. The results indicated that both processes, distractor-response binding and distractor-response retrieval are dependent on target selection difficulty. These results are discussed against recent theorizing in the BRAC framework (Frings, C., Hommel, B., Koch, I., Rothermund, K., Dignath, D., Giesen, C., Kiesel, A., Kunde, W., Mayr, S., Moeller, B., Möller, M., Pfister, R., & Philipp, A. (2020). Binding and Retrieval in Action Control (BRAC). Trends in Cognitive Sciences, 24(5), 375–387. https://doi.org/10.1016/j.tics.2020.02.004).","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"31 1","pages":"216 - 234"},"PeriodicalIF":2.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49585077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-03-16 | DOI: 10.1080/13506285.2023.2250508
Kristin A. Bartlett, J. Camba
{"title":"Is this a real 3D shape? An investigation of construct validity and item difficulty in the PSVT:R","authors":"Kristin A. Bartlett, J. Camba","doi":"10.1080/13506285.2023.2250508","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250508","url":null,"abstract":"ABSTRACT The Purdue Test of Spatial Visualization (PSVT:R) is a widely used measure of spatial ability. Though the PSVT:R is considered to be a mental rotation test, degree of angular disparity between shapes does not correspond with degree of item difficulty. In the present study, we investigate the possibility that drawings that do not naturally look like 3D shapes could affect item difficulty in the Revised PSVT:R. We conducted a shape sorting task in which participants (N = 588) were asked whether shapes from the Revised PSVT:R looked like real 3D solid shapes or not. We also made a modified version of the Revised PSVT:R in which we changed which answer was correct and compared with performance on the Revised PSVT:R (N = 807). Some questions that should have been easier based on the degree of rotation instead became harder. Our results suggest that some of the isometric drawings in the PSVT:R may not clearly look like 3D shapes and that this fact may explain why item difficulty does not correspond to degree of rotation. Our findings raise questions about whether the PSVT:R can be considered to measure mental rotation as traditionally understood.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"31 1","pages":"235 - 255"},"PeriodicalIF":2.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42390836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-03-16 | DOI: 10.1080/13506285.2023.2221046
Yevhen Damanskyy
{"title":"Verbal instructions as selection bias that modulates visual selection","authors":"Yevhen Damanskyy","doi":"10.1080/13506285.2023.2221046","DOIUrl":"https://doi.org/10.1080/13506285.2023.2221046","url":null,"abstract":"ABSTRACT Research has shown that in addition to top-down and bottom-up processes, biases produced by the repetition priming effect and reward play a major role in visual selection. Action control research argues that bidirectional effect-response associations underlie the repetition priming effect and that such associations are also achievable through verbal instructions. This study evaluated whether verbally induced effect-response instructions bias visual selective attention in a visual search task in which these instructions were irrelevant. In two online experiments (Exp.1, N = 100; Exp. 2, N = 100), participants memorized specific verbal instructions before completing speeded visual-search classification tasks. In critical trials of the visual search task, a priming stimulus specified in the verbal instructions matched the target stimulus (positive priming). In addition, the design of Experiment 2 accounted for the repetition priming effect caused by frequent appearance of the target object. Reaction time analysis showed that verbal instructions inhibited visual search. Response error analysis showed that verbal effect-response formed an effect-response association between verbally specified stimulus and response. The results also showed that the target object’s frequent appearance strongly affected visual search. The overall findings showed that verbal instructions extended the list of selection biases that modulate visual selective attention.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"31 1","pages":"169 - 187"},"PeriodicalIF":2.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45147228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISUAL COGNITION | Pub Date: 2023-03-16 | DOI: 10.1080/13506285.2023.2250528
Ksenia Gorbatova, G. Anufriev, E. Gorbunova
{"title":"Banner blindness as the suppression process: No perceptual load effect on web advertising detection","authors":"Ksenia Gorbatova, G. Anufriev, E. Gorbunova","doi":"10.1080/13506285.2023.2250528","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250528","url":null,"abstract":"ABSTRACT The study represents an application of perceptual load theory to the real-world internet users’ behavior and contributes to the dispute whether banner blindness – a tendency to ignore the banners on web pages – is a special case of inattentional blindness or a separate phenomenon. Perceptual load theory claims that processing of task-irrelevant information can be predicted by the level of perceptual load: the subjects in a high load condition are more likely to ignore the distractors, while with a low load, task-irrelevant information is processed. In four experiments, participants were divided into low and high load groups and asked to find items on a shopping website. In the critical trial, an advertising banner appeared. No significant effect of perceptual load on banner blindness was found. Banner blindness seems to be a result of attentional filters adjustment that adapts to the abundance of information on the web pages.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":"31 1","pages":"256 - 276"},"PeriodicalIF":2.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48761603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}