{"title":"Correction to: The N400 component reflecting semantic and repetition priming of visual scenes is suppressed during the attentional blink","authors":"Courtney Guida, Minwoo J. B. Kim, Olivia A. Stibolt, Alyssa Lompado, James E. Hoffman","doi":"10.3758/s13414-024-03007-0","DOIUrl":"10.3758/s13414-024-03007-0","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"713 - 713"},"PeriodicalIF":1.7,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-03007-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of dynamic shape cues in the recognition of emotion from naturalistic body motion","authors":"Erika Ikeda, Nathan Destler, Jacob Feldman","doi":"10.3758/s13414-024-02990-8","DOIUrl":"10.3758/s13414-024-02990-8","url":null,"abstract":"<div><p>Human observers can often judge emotional or affective states from bodily motion, even in the absence of facial information, but the mechanisms underlying this inference are not completely understood. Important clues come from the literature on “biological motion” using point-light displays (PLDs), which convey human action, and possibly emotion, apparently on the basis of body movements alone. However, most studies have used simplified and often exaggerated displays chosen to convey emotions as clearly as possible. In the current study we aim to study emotion interpretation using more naturalistic stimuli, which we draw from narrative films, security footage, and other sources not created for experimental purposes. We use modern algorithmic methods to extract joint positions, from which we create three display types intended to probe the nature of the cues observers use to interpret emotions: PLDs; stick figures, which convey “skeletal” information more overtly; and a control condition in which joint positions are connected in an anatomically incorrect manner. The videos depicted a range of emotions, including <i>fear, joy, nurturing, anger, sadness</i>, and <i>determination</i>. Subjects were able to estimate the depicted emotion with a high degree of reliability and accuracy, most effectively from stick figures, somewhat less so for PLDs, and least for the control condition. 
These results confirm that people can interpret emotion from naturalistic body movements alone, and suggest that the mechanisms underlying this interpretation rely heavily on skeletal representations of dynamic shape.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"604 - 618"},"PeriodicalIF":1.7,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of attention in eliciting a musically induced visual motion aftereffect","authors":"Hannah Cormier, Christine D. Tsang, Stephen C. Van Hedger","doi":"10.3758/s13414-024-02985-5","DOIUrl":"10.3758/s13414-024-02985-5","url":null,"abstract":"<div><p>Previous studies have reported visual motion aftereffects (MAEs) following prolonged exposure to auditory stimuli depicting motion, such as ascending or descending musical scales. The role of attention in modulating these cross-modal MAEs, however, remains unclear. The present study manipulated the level of attention directed to musical scales depicting motion and assessed subsequent changes in MAE strength. In Experiment 1, participants either responded to an occasional secondary auditory stimulus presented concurrently with the musical scales (diverted-attention condition) or focused on the scales (control condition). In Experiment 2 we increased the attentional load of the task by having participants perform an auditory 1-back task in one ear, while the musical scales were played in the other. Visual motion perception in both experiments was assessed via random dot kinematograms (RDKs) varying in motion coherence. Results from Experiment 1 replicated prior work, in that extended listening to ascending scales resulted in a greater likelihood of judging RDK motion as descending, in line with the MAE. In contrast, the MAE was eliminated in Experiment 2. These results were internally replicated using an in-lab, within-participant design (Experiment 3). 
These results suggest that attention is necessary in eliciting an auditory-induced visual MAE.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"480 - 497"},"PeriodicalIF":1.7,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The degree of parallel/serial processing affects stimulus-driven and memory-driven attentional capture: Evidence for the attentional window account","authors":"Cheolhwan Kim, Nahyeon Lee, Koeun Jung, Suk Won Han","doi":"10.3758/s13414-024-03003-4","DOIUrl":"10.3758/s13414-024-03003-4","url":null,"abstract":"<div><p>The issue of whether a salient stimulus in the visual field captures attention in a stimulus-driven manner has been debated for several decades. The attentional window account proposed to resolve this issue by claiming that a salient stimulus captures attention and interferes with target processing only when an attentional window is set wide enough to encompass both the target and the salient distractor. By contrast, when a small attentional window is serially shifted among individual stimuli to find a target, no capture is found. Research findings both support and challenge this attentional window account. However, in these studies, the attentional window size was improperly estimated, necessitating a re-evaluation of the account. Here, using a recently developed visual search paradigm, we investigated whether visual stimuli were processed in a parallel or a serial manner. We found significant attentional capture when multiple stimuli were processed in parallel within a large attentional window. By contrast, when a small window had to be serially shifted, no capture was found. 
We conclude that the attentional window account can be a useful framework to resolve the widespread debate regarding stimulus-driven attentional capture.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"384 - 398"},"PeriodicalIF":1.7,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of hearing experience on preschool-aged children’s eye gaze to a talker during spoken language processing","authors":"Tina M. Grieco-Calub, Yousaf Ilyas, Kristina M. Ward, Alex E. Clain, Janet Olson","doi":"10.3758/s13414-024-03001-6","DOIUrl":"10.3758/s13414-024-03001-6","url":null,"abstract":"<div><p>Speechreading—gathering speech information from talkers’ faces—supports speech perception when speech acoustics are degraded. Benefitting from speechreading, however, requires listeners to visually fixate talkers during face-to-face interactions. The purpose of this study is to test the hypothesis that preschool-aged children allocate their eye gaze to a talker when speech acoustics are degraded. We implemented a looking-while-listening paradigm to quantify children’s eye gaze to an unfamiliar female talker and two images of familiar objects presented on a screen while the children listened to speech. We tested 31 children (12 girls), ages 26–48 months, who had normal hearing (NH group, <i>n</i> = 19) or bilateral sensorineural hearing loss and used hearing devices (D/HH group, <i>n</i> = 12). Children’s eye gaze was video-recorded as the talker verbally labeled one of the images, either in quiet or in the presence of an unfamiliar two-talker male speech masker. Children’s eye gaze to the target image, distractor image, and female talker was coded every 33 ms off-line by trained observers. Bootstrapped differences of time series (BDOTS) analyses and ternary plots were used to determine differences in visual fixations of the talker between listening conditions in the NH and D/HH groups. Results suggest that the NH group visually fixated the talker more in the masker condition than in quiet. We did not observe statistically discernable differences in visual fixations of the talker between the listening conditions for the D/HH group. 
Gaze patterns of the NH group in the masker condition looked like gaze patterns of the D/HH group.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"531 - 544"},"PeriodicalIF":1.7,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deceptive illusory cues can influence orthogonally directed manual length estimations","authors":"Shijun Yan, Jan M. Hondzinski","doi":"10.3758/s13414-024-02991-7","DOIUrl":"10.3758/s13414-024-02991-7","url":null,"abstract":"<div><p>We examined participants’ abilities to manually estimate one of two perpendicular line segment lengths using curved point-to-point movements. Configurations involved symmetrical, unsymmetrical, and no bisection in upright and rotated orientation alterations to vertical-horizontal (V-H) illusions, where people often perceive longer vertical than horizontal segments for equal segment lengths. Participants used two orthogonally directed movements for length estimations: positively proportional (POS) – where greater fingertip displacement involved longer length estimation between configuration intersection start position and fingertip end, and negatively proportional (NEG) – where greater fingertip displacement from the screen edge start position toward configuration intersection involved a shorter length estimation between configuration intersection and fingertip end. Length estimations followed most standard perceptual aspects of the V-H illusion for POS estimations, yet differed between upright and rotated orientations for the symmetrical configuration. NEG estimations revealed no illusory influences. 
Use of allocentric programming likely accompanied POS estimations to explain V-H illusory influences on perceptuomotor control.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"588 - 603"},"PeriodicalIF":1.7,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02991-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Contralateral delay activity tracks the influence of Gestalt grouping principles on active visual working memory representations","authors":"Dwight J. Peterson, Filiz Gözenman, Hector Arciniega, Marian E. Berryhill","doi":"10.3758/s13414-024-02999-z","DOIUrl":"10.3758/s13414-024-02999-z","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"714 - 718"},"PeriodicalIF":1.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of maze appearance on maze solving","authors":"Yelda Semizer, Dian Yu, Qianqian Wan, Benjamin Balas, Ruth Rosenholtz","doi":"10.3758/s13414-024-03000-7","DOIUrl":"10.3758/s13414-024-03000-7","url":null,"abstract":"<div><p>As mazes are typically complex, cluttered stimuli, solving them is likely limited by visual crowding. Thus, several aspects of the appearance of the maze – the thickness, spacing, and curvature of the paths, as well as the texture of both paths and walls – likely influence the performance. In the current study, we investigate the effects of perceptual aspects of maze design on maze-solving performance to understand the role of crowding and visual complexity. We conducted two experiments using a set of controlled stimuli to examine the effects of path and wall thickness, as well as the style of rendering used for both paths and walls. Experiment 1 finds that maze-solving time increases with thicker paths (thus thinner walls). Experiment 2 replicates this finding while also showing that maze-solving time increases when mazes have wavy walls, which are likely more crowded, rather than straight walls. Our findings imply a role of both crowding and figure/ground segmentation in mental maze solving and suggest reformulating the growth cone models.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"637 - 649"},"PeriodicalIF":1.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-03000-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextual cues can be used to predict the likelihood of and reduce interference from salient distractors","authors":"Jeff Moher, Andrew B. Leber","doi":"10.3758/s13414-024-03004-3","DOIUrl":"10.3758/s13414-024-03004-3","url":null,"abstract":"<div><p>Our attention can sometimes be disrupted by salient but irrelevant objects in the environment. This distractor interference can be reduced when distractors appear frequently, allowing us to anticipate their presence. However, it remains unknown whether distractor frequency can be learned implicitly across distinct contexts. In other words, can we implicitly learn that in certain situations a distractor is more likely to appear, and use that knowledge to minimize the impact that the distractor has on our behavior? In two experiments, we explored this question by asking participants to find a unique shape target in displays that could contain a color singleton distractor. Forest or city backgrounds were presented on each trial, and unbeknownst to the participants, each image category was associated with a different distractor probability. We found that distractor interference was reduced when the image predicted a high rather than low probability of distractor presence on the upcoming trial, even though the location and (in Experiment 2) the color of the distractor was completely unpredictable. These effects appear to be driven by implicit rather explicit learning. 
We conclude that implicit learning of context-specific distractor probabilities can drive flexible strategies for the reduction of distractor interference.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"303 - 315"},"PeriodicalIF":1.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-03004-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual averaging on relevant and irrelevant featural dimensions","authors":"Philip T. Quinlan, Dale J. Cohen, Keith Allen","doi":"10.3758/s13414-024-03005-2","DOIUrl":"10.3758/s13414-024-03005-2","url":null,"abstract":"<div><p>Here we report four experiments that explore the nature of perceptual averaging. We examine the evidence that participants recover and store a representation of the mean value of a set of perceptual features that are distributed across the optic array. The extant evidence shows that participants are particularly accurate in estimating the relevant mean value, but we ask whether this might be due to processes that reflect assessing featural similarity rather than computing an average. We set out and test detailed predictions that can be used to adjudicate between these averaging and similarity hypotheses. In each experiment, a memory display of randomly positioned bars was briefly presented followed immediately by a probe bar. Participants had to report in a Yes/No task whether the probed feature value was present. In initial experiments, we examine reports of the orientation of white bars and of the color of vertical bars. Then, in companion experiments, we examine reports of the orientation of bars whose color vary, and of the color of bars whose orientation varies. In this way, we test ideas about whether perceptual averaging occurs on a featural dimension that is irrelevant to the task. 
Currently, it is not known whether perceptual averaging only takes place on a task-relevant dimension or whether it operates more widely.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"698 - 711"},"PeriodicalIF":1.7,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-03005-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}