VISUAL COGNITION | Pub Date: 2023-11-20 | DOI: 10.1080/13506285.2023.2277475
Alice Nevard, Graham J. Hole, Jonathan E. Prunty, Markus Bindemann
Understanding face detection with visual arrays and real-world scenes
Abstract: Face detection has been studied by presenting faces in blank displays, object arrays, and real-world scenes. This study investigated whether these display contexts differ in what they can reveal ab...
VISUAL COGNITION | Pub Date: 2023-11-09 | DOI: 10.1080/13506285.2023.2279217
Danlei Chen, J. Benjamin Hutchinson
When memory meets distraction: The role of unexpected stimulus-driven attentional capture on contextual cueing
Abstract: Visuospatial attention plays a critical role in prioritizing behaviourally relevant information and can be guided by task goals, stimulus salience, and memory. Here, we examined the interaction between memory-guided attention (contextual cueing) and stimulus-driven attention (unexpected colour singletons). In two visual search experiments with different set sizes, colour singletons were introduced unexpectedly on some trials after repeated configurations had been used to establish contextual cueing. Reaction times were rapidly impacted by both contextual cueing and colour singletons, with no significant interaction. However, introducing colour singletons also slowed reaction times for novel configurations without colour singletons, while repeated configurations were not impacted. These results suggest that, at the trial level, contextual cueing and colour-singleton effects are largely independent factors driving selective attention, but there is evidence for a more general disruption from introducing distraction in cases where memory cannot be relied upon, suggesting a more complex interaction between attentional influences.
Keywords: visual search; contextual cueing; pop-out effect; episodic memory
Acknowledgments: We thank Emma Takizawa, Ramana Housman, and Sarah Zhang for participant recruitment and data collection.
Disclosure statement: No potential conflict of interest was reported by the author(s).
VISUAL COGNITION | Pub Date: 2023-10-31 | DOI: 10.1080/13506285.2023.2268382
Madeline Gedvila, Joan Danielle K. Ongchoco, Wilma A. Bainbridge
Memorable beginnings, but forgettable endings: Intrinsic memorability alters our subjective experience of time
Abstract: Time is the fabric of experience, yet it is incredibly malleable in the mind of the observer: it can seem to drag on, or fly right by, at different moments. One of the most influential drivers of temporal distortions is attention, where heightened attention dilates subjective time. But an equally important feature of subjective experience involves not just the objects of attention, but also what information will naturally be remembered or forgotten, independent of attention (i.e., intrinsic image memorability). Here we test how memorability influences time perception. Observers viewed scenes in an oddball paradigm, where the last scene could be a forgettable "oddball" amidst memorable ones, or vice versa. Subjective time dilation occurred only for forgettable oddballs, but not memorable ones, demonstrating an oddball effect where the oddball did not differ in low-level visual features, image category, or even subjective memorability. More importantly, these results emphasize how memory can interact with temporal experience: memorable beginnings may put people in an efficient encoding state, which may in turn influence which moments are dilated in time.
Keywords: time perception; time dilation; oddball effect; memorability; scene perception
Disclosure statement: No potential conflict of interest was reported by the author(s).
Author contributions: MG, JDKO, and WAB designed the research and wrote the manuscript. MG and JDKO conducted the experiments and analyzed the data with input from WAB.
Open practices: All data will be available in the Supplementary Raw Data Archive included with this submission, and via OSF: https://osf.io/dkxez/?view_only=38c7d6db309d49219360b21c41b431d2.
Funding: MG was funded by the University of Chicago Metcalf Research Internship in Neuroscience. WAB is supported by the National Eye Institute (R01-EY034432). For helpful comments, we thank the members of the Brain Bridge Lab.
VISUAL COGNITION | Pub Date: 2023-10-16 | DOI: 10.1080/13506285.2023.2250505
Andrew Wildman, Richard Ramsey
Investigating the automaticity of links between body perception and trait concepts
Abstract: Social cognition has been argued to rely on automatic mechanisms, but little is known about how automatically the processing of body shapes is linked to other social processes, such as trait inference. In three pre-registered experiments, we tested the automaticity of links between body shape perception and trait inference by manipulating cognitive load during a response-competition task. In Experiment 1 (N = 52), participants categorised body shapes in the context of compatible or incompatible trait words, under high and low cognitive load. Bayesian multi-level modelling of reaction times indicated that interference caused by the compatibility of trait cues was insensitive to concurrent demands placed on working memory resources. These findings indicate that the linking of body shapes and traits is resource-light and, in this sense, more "automatic". In Experiments 2 (N = 39) and 3 (N = 70), we asked participants to categorise trait words in the context of task-irrelevant body shapes. Under these conditions, no evidence of interference was found, regardless of concurrent load. These results suggest that while body shapes and trait concepts can be linked in an automatic manner, such processes are sensitive to wider contextual factors, such as the order in which information is presented.
Keywords: social cognition; body perception; automaticity; trait inference; cognitive load
Disclosure statement: No potential conflict of interest was reported by the author(s).
VISUAL COGNITION | Pub Date: 2023-10-02 | DOI: 10.1080/13506285.2023.2263204
Timothy L. Hubbard, Susan E. Ruppel
Are attentional momentum and representational momentum related?
Abstract: In attentional momentum, detection of a target further ahead in the direction of an ongoing attention shift is faster than detection of a target an equal distance away in an orthogonal direction. In representational momentum, memory for the location of a previously viewed target is displaced in the direction of target motion. Hubbard [Hubbard, T. L. (2014). Forms of momentum across space: Representational, operational, and attentional. Psychonomic Bulletin & Review, 21(6), 1371–1403; Hubbard, T. L. (2015). The varieties of momentum-like experience. Psychological Bulletin, 141(6), 1081–1119] hypothesized that attentional momentum and representational momentum might be related or reflect the same or similar mechanisms. Two experiments collected measures of attentional momentum and representational momentum. In Experiment 1, attentional momentum, based on differences between detecting targets opposite or orthogonal to a cued location, was not correlated with representational momentum, based on M displacement for the final location of a target. In Experiment 2, attentional momentum, based on facilitation in detecting a gap on a probe presented in front of the final target location, was not correlated with representational momentum, based on a weighted mean of the probabilities of a "same" response in probe judgments of the final target location. Implications of the findings for the relationship between attentional momentum and representational momentum, and for theories of momentum-like effects in general, are considered.
Keywords: attentional momentum; representational momentum; displacement; spatial representation
Acknowledgement: The authors thank two anonymous reviewers for helpful comments on a previous version of the manuscript.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes:
1. Durations of the different stages of a trial differed slightly from those in Pratt et al. (1999) to ensure that timing in the attentional momentum task was consistent with timing in the representational momentum task.
2. Hubbard (2019) suggested that an understanding of momentum-like processes needs to consider all of Marr's (1982) levels of analysis. Accordingly, although attentional momentum and representational momentum appear similar at the level of computational theory (i.e., both facilitate processing of spatial information expected to be present in the near future, and both involve displacement across space; Hubbard, 2014, 2015), the current data suggest attentional momentum and representational momentum could differ at the level of representation and algorithm or the level of implementation (i.e., involve different mechanisms).
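The Experiment 2 measure of representational momentum, a weighted mean of the probabilities of a "same" response across probe positions, can be sketched as a simple calculation. This is a minimal illustration with made-up probe offsets and response probabilities, not the authors' analysis code.

```python
# Hypothetical probe offsets (display units) relative to the true final
# target position: negative = behind the target, positive = ahead of it.
probe_offsets = [-2, -1, 0, 1, 2]
# Made-up probability of a "same" response at each probe position.
p_same = [0.05, 0.20, 0.60, 0.45, 0.15]

# Weighted mean displacement: each offset weighted by how often observers
# accepted a probe at that position as "same" as the remembered location.
weighted_mean = sum(o * p for o, p in zip(probe_offsets, p_same)) / sum(p_same)

# A positive value indicates forward displacement of remembered position,
# i.e., representational momentum.
print(round(weighted_mean, 3))  # prints 0.31
```

With these invented numbers the remembered location is shifted about a third of a unit forward; in the actual experiment, such per-participant scores would be correlated with the attentional momentum measure.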
VISUAL COGNITION | Pub Date: 2023-09-05 | DOI: 10.1080/13506285.2023.2250506
R. Yu, Jiaying Zhao
Serial and joint processing of conjunctive predictions
Abstract: When two jointly presented cues predict different outcomes, people respond faster to the conjunction/overlap of the outcomes. Two explanations exist. On the joint account, people prioritize the conjunction. On the serial account, people process the cues serially and incidentally respond faster to the conjunction. We tested these accounts in three experiments using novel web-based attention-tracking tools. Participants learned colour-location associations in which colours predicted target locations (Experiment 1). Afterward, two cues appeared jointly and targets followed randomly. Exploratory data showed participants initially prioritized locations consistent with the conjunction, shifting later. Experiment 2 presented complex colour-category associations during exposure. Upon seeing joint cues, participants' responses indicated both serial and joint processing. Experiment 3, with imperfect cue-outcome associations during exposure, surprisingly showed robust conjunctive predictions, likely because people expected exceptions to their predictions. Overall, strong learning led to spontaneous conjunctive predictions, but there were quick shifts to alternatives, such as serial processing, when people were not expecting exceptions.
VISUAL COGNITION | Pub Date: 2023-08-30 | DOI: 10.1080/13506285.2023.2250530
C. R. McCormick, R. S. Redden, R. Klein
How does exogenous alerting impact endogenous preparation on a temporal cueing task
(No abstract available.)
VISUAL COGNITION | Pub Date: 2023-08-30 | DOI: 10.1080/13506285.2023.2250507
R. Hessels, Martin K. Teunisse, D. Niehorster, M. Nyström, J. Benjamins, Atsushi Senju, Ignace T. C. Hooge
Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory?
Abstract: Visual routines theory posits that vision is critical for guiding sequential actions in the world. Most studies on the link between vision and sequential action have considered individual agents, while substantial human behaviour is characterized by multi-party interaction. Here, the actions of each person may affect what the other can subsequently do. We investigated the task execution and gaze allocation of 19 dyads completing a Duplo-model copying task together while wearing the Pupil Invisible eye tracker. We varied whether all blocks were visible to both participants, and whether verbal communication was allowed. For models in which not all blocks were visible, participants seemed to coordinate their gaze: the distance between the participants' gaze positions was smaller, and dyads looked at the model concurrently for longer, than for models in which all blocks were visible. This was most pronounced when verbal communication was allowed. We conclude that the way the collaborative task was executed depended both on whether visual information was available to both persons and on how communication took place. Modelling task structure and gaze allocation for human-human and human-robot collaboration thus requires more than the observable behaviour of either individual. We discuss whether an interactive visual routines theory ought to be pursued.
VISUAL COGNITION | Pub Date: 2023-04-21 | DOI: 10.1080/13506285.2023.2250514
K. Ritchie, C. Cartledge, R. Kramer
Investigating the other race effect: Human and computer face matching and similarity judgements
Abstract: The other race effect (ORE) in part describes how people are poorer at identifying faces of other races compared with own-race faces. While well established for face memory, more recent studies have begun to demonstrate its presence in face matching tasks, which have minimal memory requirements. However, several of these studies failed to compare both races of faces and participants in order to fully test the predictions of the ORE. Here, we utilized images of both Black and White individuals, and Black and White participants, as well as tasks measuring perceptions of face matching and similarity. In addition, human judgements were directly compared with computer algorithms. First, we found only partial support for an ORE in face matching. Second, a deep convolutional neural network (a residual network with 29 layers) performed exceptionally well with both races. The DCNN's representations were strongly associated with human perceptions. Taken together, we found that the ORE was not robust or compelling in our human data, and was absent in the computer algorithms we tested. We discuss our results in the context of the ORE literature, and the importance of state-of-the-art algorithms.
Pages: 314–325
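Comparisons like the one above between DCNN representations and human judgements typically rest on embedding similarity: the network maps each face image to a vector, and pairs of faces are scored by the angle between their vectors. The sketch below illustrates the scoring step only, with random stand-in vectors in place of real network embeddings; the paper's actual network, images, and analysis are not reproduced here.

```python
import numpy as np

# Random stand-ins for the embeddings a face-recognition network
# (e.g., a 29-layer residual network) would produce for two face images.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=128)  # embedding of face image A (hypothetical)
emb_b = rng.normal(size=128)  # embedding of face image B (hypothetical)

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

score = cosine_similarity(emb_a, emb_b)
# Pairs scoring above a validation-chosen threshold would be declared a
# match; such scores can also be correlated with human similarity ratings.
assert -1.0 <= score <= 1.0
```

The design choice here is standard: cosine similarity ignores embedding magnitude, so only the direction of the representation (which encodes identity information) drives the match decision.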
VISUAL COGNITION | Pub Date: 2023-03-16 | DOI: 10.1080/13506285.2023.2224120
Ruth Laub, A. Münchau, C. Beste, C. Frings
Too much information … The influence of target selection difficulty on binding processes
Abstract: The binding of stimuli and responses is an important mechanism in action control. Features of stimuli and responses are integrated into event files. A re-encounter with one or more of the stored features leads to automatic retrieval of the previous event file, including the previously integrated response. The distractor-response binding effect has shown that even irrelevant stimuli can be integrated with a response, subsequently trigger retrieval, and thereby have an impact on behaviour. However, the type of distractor stimuli, the method of distractor presentation, and the display configuration have differed widely across previous studies with regard to target selection difficulty. In the present study, we therefore varied the extent of target selection difficulty to investigate its role in the distractor-response binding effect. The results indicated that both processes, distractor-response binding and distractor-response retrieval, depend on target selection difficulty. These results are discussed against recent theorizing in the BRAC framework (Frings, C., Hommel, B., Koch, I., Rothermund, K., Dignath, D., Giesen, C., Kiesel, A., Kunde, W., Mayr, S., Moeller, B., Möller, M., Pfister, R., & Philipp, A. (2020). Binding and retrieval in action control (BRAC). Trends in Cognitive Sciences, 24(5), 375–387. https://doi.org/10.1016/j.tics.2020.02.004).
Pages: 216–234