Smells like … no evidence that odors influence the attentional blink
Ryan Hackländer, Pamela Baess, Christina Bermeitinger
Attention, Perception, & Psychophysics, 87(2), 458–479. DOI: 10.3758/s13414-024-02986-4. Published 2024-12-19. Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-024-02986-4.pdf

Abstract: The attentional blink (AB) paradigm is frequently used to investigate temporal attention. Essentially, rapid serial visual streams of several distractors and two targets are presented. Accuracy in detecting the second target stimulus (T2) decreases in the time window between 100 and 500 ms following accurate detection of the first target stimulus (T1). In two experiments, Colzato et al. (Attention, Perception, & Psychophysics, 76, 1510–1515, 2014) reported evidence for a modulation of the AB effect depending on the presentation of different ambient odors: peppermint increased the AB compared with lavender. In the current study, we tried to replicate their basic findings while using different methods and procedures to present the lavender versus peppermint odorants. In three experiments, we found no evidence that these odorants influence the AB effect. We discuss our findings in comparison with those of Colzato et al., in relation to other empirical research in this field, and with regard to different hypotheses concerning how odorants may influence human cognition.
Temporal binding: Task-dependent variations and reliability across experimental paradigms
Gustavo B. de Azevedo, André M. Cravo, Marc J. Buehner
Attention, Perception, & Psychophysics, 87(2), 650–669. DOI: 10.3758/s13414-024-02996-2. Published 2024-12-19.

Abstract: Temporal binding refers to the subjective shortening of time between a cause and its effect compared with two unrelated events. The effect has been extensively explored over the past two decades and manifests across a robust range of paradigms, reflecting two distinct expressions of binding: (1) the subjective shortening of elapsed time between cause and effect and (2) the subjective attraction of cause and effect to each other. However, whether and how these binding expressions are related is still largely unknown. In this study, we report two experiments employing four tasks (stimulus anticipation, Libet clock, interval estimation, and reproduction). We computed within- and between-session and between-task correlations across two (Experiment 1) and six (Experiment 2) sessions. Across both experiments, we successfully replicated temporal binding in temporal estimation, temporal reproduction, and the Libet clock, but not in stimulus anticipation. Within-task and within-session reliability was good, but between-session reliability was poor. Correlation analyses revealed associations between binding effects measured via temporal estimation and temporal reproduction, underscoring task-dependent variations, in line with the suggestion that different temporal tasks tap into distinct facets of the temporal binding effect. This nuanced understanding contributes to refining experimental paradigms and advancing the comprehension of human temporal processing. The data, materials, and experiments from the present study are publicly available.
{"title":"Front-back asymmetries in endogenous auditory spatial attention","authors":"Andor L. Bodnár, Jeffrey R. Mock, Edward J. Golob","doi":"10.3758/s13414-024-02995-3","DOIUrl":"10.3758/s13414-024-02995-3","url":null,"abstract":"<div><p>Research on endogenous auditory spatial attention typically uses headphones or sounds in the frontal hemispace, which undersamples panoramic spatial hearing. Crossmodal attention studies also show that visual information impacts spatial hearing and attention. Given the overlap between vision and audition in frontal space, we tested the hypothesis that the distribution of endogenous auditory spatial attention would differ when attending to the front versus back hemispace. Participants performed a non-spatial discrimination task where most sounds were presented at a standard location, but occasionally shifted to other locations. Auditory spatial attention cueing and gradient effects across locations were measured in five experiments. Accuracy was greatest at standard versus shift locations, and was comparable when attending to the front or back. Reaction time measures of cueing and gradient effects were larger when attending to the front versus back midline, a finding that was evident when the range of spatial locations was 180° or 360°. When participants were blindfolded, the front/back differences were still present. Sound localization and divided attention tasks showed that the front/back differences were attentional, rather than perceptual, in nature. Collectively, the findings reveal that the impact of endogenous auditory spatial attention depends on where attention is being focused.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"511 - 530"},"PeriodicalIF":1.7,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142840236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pitch-induced illusory percepts of time","authors":"Jesse K. Pazdera, Laurel J. Trainor","doi":"10.3758/s13414-024-02982-8","DOIUrl":"10.3758/s13414-024-02982-8","url":null,"abstract":"<div><p>Past research suggests that pitch height can influence the perceived tempo of speech and music, such that higher-pitched signals seem faster than lower-pitched ones. However, previous studies have analyzed perceived tempo across a relatively limited range of fundamental frequencies. To investigate whether this higher-equals-faster illusion generalizes across the wider range of human hearing, we conducted a series of five experiments. We asked participants to compare the tempo of repeating tones from six different octaves and with 15 different interonset intervals to a metronomic standard tempo. In Experiments 1–3, we observed an inverted U-shaped effect of pitch on perceived tempo, with the perceived tempo of piano tones peaking between A4 (440 Hz) and A5 (880 Hz) and decreasing at lower and higher frequencies. This bias was consistent across base tempos and was only slightly attenuated by synchronous tapping with the repeating tones. Experiment 4 tested synthetic complex tones to verify that this nonlinearity generalizes beyond the piano timbre and that it was not related to the presence of low-frequency mechanical noise present in our piano tones. Experiment 5 revealed that the decrease in perceived tempo at extremely high octaves can be abolished by exposing participants to only high-pitched tones. Together, our results suggest that perceived tempo depends more on the relative pitch within a context than on absolute pitch and that tempo biases may invert or taper off beyond a two-octave range. We relate this context-dependence to human vocal ranges and propose that illusory tempo effects are strongest within pitch ranges consistent with human vocalization.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"545 - 564"},"PeriodicalIF":1.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02982-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No evidence for a privileged role of global ensemble statistics in rapid scene perception: A registered replication attempt","authors":"Jiongtian Guo, Jay Pratt, Dirk B. Walther","doi":"10.3758/s13414-024-02994-4","DOIUrl":"10.3758/s13414-024-02994-4","url":null,"abstract":"<div><p>The nature of visual processes underlying scene perception remains a hotly debated topic. According to one view, scene and object perception rely on similar neural mechanisms, and their processing pathways are tightly interlinked. According to another, scene gist might follow a separate pathway, relying primarily on global image properties. Recently, this latter idea has been supported with a set of experiments using content priming as a probe into scene and object perception (Brady et al. <i>Journal of Experimental Psychology: Human Perception and Performance</i>, <i>43</i>, 1160–1176, 2017). The experiments have shown that preserving only structureless global ensemble texture information in the images of scenes could support rapid scene perception; however, preserving the same information in the images of objects failed to support object perception. We were intrigued by these results, since they are at odds with findings showing that scene content is primarily carried by the explicit encoding of scene structure as represented, for instance, by contours and their properties. In an attempt to reconcile these results, we attempted to replicate the experiments. In our replication experiment, we failed to find any evidence for a privileged use of texture information for scene as opposed to object primes. We conclude that there is no sufficient evidence for any fundamental differences in the processing pathways for object and scene perception: both rely on structural features that describe spatial relationships between constituent parts as well as texture information. To address this issue in the most rigorous manner possible, we here present the results of both a pilot experiment and a pre-registered replication attempt.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"685 - 697"},"PeriodicalIF":1.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Blindfold Test: Helping to decide whether an effect reflects visual processing or higher-level judgment","authors":"Benjamin F. van Buren, Brian J. Scholl","doi":"10.3758/s13414-024-02939-x","DOIUrl":"10.3758/s13414-024-02939-x","url":null,"abstract":"<div><p>Experimenters often ask subjects to rate displays in terms of high-level visual properties, such as animacy. When do such studies measure subjects’ visual impressions, and when do they merely reflect their judgments that certain features <i>should</i> indicate animacy? Here we introduce the ‘Blindfold Test’ for helping to evaluate the evidence for whether an effect reflects perception or judgment. If the same effect can be obtained not only with visual displays but also by simply <i>describing</i> those displays, then subjects’ responses may reflect higher-level reasoning rather than visual processing—and so other evidence is needed in order to support a ‘perceptual’ interpretation. We applied the Blindfold Test to three past studies in which observers made subjective reports about what they were seeing. In the first two examples, subjects rated stimuli in terms of high-level properties: animacy and physical forces. In both cases, the key findings replicated even when the visual stimuli were replaced with (mere) descriptions, and we conclude that these studies cannot by themselves license conclusions about perception. In contrast, a third example (involving motion-induced blindness) passed the test: subjects produced very different responses when given descriptions of the displays, compared to the visual stimuli themselves—providing compelling evidence that the original responses did not merely reflect such higher-level reasoning. The Blindfold Test may thus help to constrain interpretations of the mental processes underlying certain experimental results—especially for studies of properties that can be apprehended by both seeing and thinking.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"445 - 457"},"PeriodicalIF":1.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble representation of animacy could be based on mid-level visual features","authors":"Natalia A. Tiurina, Yuri A. Markov","doi":"10.3758/s13414-024-02976-6","DOIUrl":"10.3758/s13414-024-02976-6","url":null,"abstract":"<div><p>Studies suggest that mid-level features could underlie object animacy perception. In the current research, we tested whether ensemble animacy perception is based on high- or mid-level features. We used five types of images of animals and inanimate objects: color, grayscale, silhouettes, texforms – unrecognizable images that preserve mid-level texture and shape information – and scrambled images. In the series of Experiments 1, we asked participants to evaluate the animacy of single images and sets of eight images using a 10-point scale. In the series of Experiments 2, participants were shown two sets of eight images and had to choose a more animate one in the two-alternative forced-choice (2AFC) task. We found that in both paradigms, observers could report the mean animacy of the set of texform images without direct access to information about high-level features. Thus, ensemble animacy could be extracted only based on mid-level features such as shape and texture without access to more high-level information.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"415 - 430"},"PeriodicalIF":1.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal mechanisms underlying visual processing bias in peri-hand space","authors":"Ankit Maurya, Anuj Shukla, Tony Thomas","doi":"10.3758/s13414-024-02980-w","DOIUrl":"10.3758/s13414-024-02980-w","url":null,"abstract":"<div><p>The immediate space surrounding the hands has often been termed the peri-hand space (PHS), and is characterized by a smaller reaction time (RT), better detection, and enhanced accuracy for stimuli presented in this space, relative to those stimuli presented beyond this space. Such behavioral changes have been explained in terms of a biased allocation of cognitive resources such as perception, attention, and memory, for the efficient processing of information presented in the PHS. However, in two experiments, the current study shows that these cognitive biases seem to have an underlying temporal basis. The first experiment requires participants to perform a temporal bisection task, whereas the second experiment requires them to perform a verbal estimation task when stimuli are presented either near the hands or relatively far. Results from both experiments give evidence for slowing down of temporal mechanisms in the PHS – reflected in the form of temporal dilation for stimuli presented in the PHS relative to those presented further away. The slowing down of time in the PHS seems crucial in giving sufficient temporal allowance for the allocation of cognitive resources to prioritize the processing of information in the PHS. The findings are in line with the early anticipatory mechanisms associated with the PHS and seem to be driven by the switch/gate mechanism, and not the pacemaker component of the attentional gate model of time perception. Thus, the current study tries to integrate the theories of time perception with the peripersonal space literature.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 8","pages":"2659 - 2671"},"PeriodicalIF":1.7,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target selection during "snapshot" foraging
Sofia Tkhan Tin Le, Árni Kristjánsson, W. Joseph MacInnes
Attention, Perception, & Psychophysics, 86(8), 2778–2793. DOI: 10.3758/s13414-024-02988-2. Published 2024-11-27.

Abstract: While previous foraging studies have identified key variables that determine attentional selection, they are affected by the global statistics of the tasks. In most studies, targets are selected one at a time without replacement while distractor numbers remain constant, steadily reducing the ratio of targets to distractors with every selection. We designed a foraging task with a sequence of local "snapshots" of foraging displays, with each snapshot requiring a target selection. This enabled tighter control of local target and distractor type ratios while maintaining the flavor of a sequential, multiple-target foraging task. Observers saw only six items for each target selection during a "snapshot" containing varying numbers of two target types and two distractor types. After each selection, a new six-item array (the following snapshot) immediately appeared, centered on the locus of the last selected target. We contrasted feature-based and conjunction-based foraging and analyzed the data by the proportion of different target types in each trial. We found that target type proportion affected selection, with longer response times during conjunction foraging when the number of the alternate target types was greater than the repeated target types. In addition, the choice of target in each snapshot was influenced by the relative positions of selected targets and distractors during preceding snapshots. Importantly, this shows to what degree previous findings on foraging can be attributed to changing global statistics of the foraging array. We propose that "snapshot foraging" can increase experimental control in understanding how people choose targets during continuous attentional orienting.
Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations
Sara Milligan, Milca Jaime Brunet, Neslihan Caliskan, Elizabeth R. Schotter
Attention, Perception, & Psychophysics, 87(1), 76–93. DOI: 10.3758/s13414-024-02984-6. Published 2024-11-20.

Abstract: Readers are able to begin processing upcoming words before directly fixating them, and in some cases skip words altogether (i.e., never fixate them). However, the exact mechanisms and recognition thresholds underlying skipping decisions are not entirely clear. In the current study, we test whether skipping decisions reflect instances of more extensive lexical processing by recording neural language processing (via electroencephalography; EEG) and eye movements simultaneously, and we split trials based on target-word skipping behavior. To test lexical processing of the words, we manipulated the orthographic and phonological relationship between upcoming preview words and a semantically correct (and in some cases, expected) target word using the gaze-contingent display change paradigm. We also manipulated the constraint of the sentences to investigate the extent to which the identification of sublexical features of words depends on a reader's expectations. We extracted fixation-related brain potentials (FRPs) during the fixation on the preceding word (i.e., in response to parafoveal viewing of the manipulated previews). We found that word skipping is associated with larger neural responses (i.e., N400 amplitudes) to semantically incongruous words that did not share a phonological representation with the correct word, and this effect was only observed in high-constraint sentences. These findings suggest that word skipping can be reflective of more extensive linguistic processing, but in the absence of expectations, word skipping may occur based on less fine-grained linguistic processing and be more reflective of the identification of plausible or expected sublexical features rather than higher-level lexical processing (e.g., semantic access).