{"title":"Amplitude envelope and subjective duration: Quantifying the role of decaying offsets in timing perception","authors":"Connor Wessel, Cindy Zhang, Michael Schutz","doi":"10.3758/s13414-025-03186-4","DOIUrl":"10.3758/s13414-025-03186-4","url":null,"abstract":"<div><p>Although duration perception is well-researched in the auditory literature, many experiments ostensibly exploring generalized processing use one type of tone—simplistic “beeps” with abrupt offsets. This leaves unaddressed the question of how we perceive duration when listening to the types of temporally complex sounds common in everyday listening. Here, we investigate the point of equivalence for the duration of steady state (aka “flat”) and more natural decaying (aka “percussive”) tones. Through this, we (1) gain further insight into amplitude envelope’s role in duration perception and (2) provide guidance useful for future studies moving beyond simplistic tones with flat amplitude envelopes. Specifically, we conduct a series of 2-alternative forced-choice adaptive staircase procedures across three experiments, with participants deciding which of two tones sounds longer. Experiment 1 uses sounds matched in amplitude envelope (homogeneous, <i>N</i> = 54), and Experiment 2 uses mismatched sounds (heterogeneous, <i>N</i> = 55). In Experiment 3, participants completed both homogeneous and heterogeneous conditions across 10 sessions (<i>N</i> = 5). The heterogeneous data illustrate that a two-parameter linear equation (<i>y</i> = 110 + 1.31<i>x</i>) best describes the point of subjective equality between flat and percussive tones, with model comparisons suggesting most unexplained variance can be attributed to individual differences. 
Together, these findings provide a useful step towards clarifying the perception of tones with amplitude envelopes more complex than those traditionally used in auditory perception studies, which holds important implications for both our theoretical understanding of perceived timing as well as ongoing applied work on improving hospital medical device sounds (which often use flat tones).</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determining the potential benefits of error feedback and metacognition on perceptual learning in the temporal and spatial domain","authors":"Jiaxuan Teng, Eve A. Isham","doi":"10.3758/s13414-025-03183-7","DOIUrl":"10.3758/s13414-025-03183-7","url":null,"abstract":"<div><p>Understanding time is crucial for our survival, influencing tasks that require coordination, alignment, and cognitive assessments. However, the process of learning and monitoring of temporal errors remains unclear. A subset of studies has shown that, unlike other modalities of magnitudes, perceptual learning in the temporal domain may not benefit from error feedback, suggesting that temporal perceptual learning may involve a distinct process that differs from other non-temporal information. We hypothesize this may be attributed to the concept of time being deeply and internally rooted within each organism and therefore may better benefit from an evaluation process that is done internally rather than from external feedback. To further investigate how we learn to time, the current study examines the learning rate, specificity, and transferability as a function of feedback method (explicit feedback and self-reflected metacognitive evaluation) during a temporal production task. The examination is also conducted in conjunction with a line production task to determine if the results diverge for temporal and spatial domains. Our results showed that spatial performance improved across all feedback conditions. However, improvements in temporal accuracy were slower and less pronounced regardless of feedback type. Further analysis revealed that participants were aware of the direction and magnitude of their errors, even when accuracy did not improve, highlighting a potential role for metacognitive insight that supports error monitoring and may aid learning transfer. 
These findings are discussed with respect to the cognitive mechanisms underlying temporal learning.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Top-down preparation contributes to intertrial priming in singleton search","authors":"Ben Sclodnick, Hong-Jin Sun, Bruce Milliken","doi":"10.3758/s13414-025-03169-5","DOIUrl":"10.3758/s13414-025-03169-5","url":null,"abstract":"<div><p>This study examined the influence of top-down preparation on singleton search performance. The method involved presentation of a single item that was unpredictably blue or orange, followed by a singleton search display that was unpredictably a blue target with orange distractors or vice versa. Preparation was instantiated by instructing participants to respond to the single item only if it was a particular colour (e.g., “respond only to blue single items”). The subsequent colour-singleton search target was either blue or orange. In a prior study with this method, participants prepared for the same single-item colour on all trials, and search performance was more than 200 ms faster when the prepared-for colour matched the colour singleton target than when it mismatched the colour singleton target (Sclodnick et al., <i>Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale</i>, <i>78</i>, 129–135, 2024). In the present study, Experiments 1, 2a/2b, and 3a/3b demonstrate that a similar but smaller magnitude effect occurs when preparation for a particular single item colour is cued randomly from trial to trial. Experiments 2a/2b demonstrate that this preparatory effect is sensitive to the temporal interval between single-item and search tasks, but only when preparation is cued on a trial-to-trial basis. Experiments 3a/3b demonstrate that this preparatory effect is reduced with increases in display size, but still robust with display sizes up to nine items. 
Together, the results demonstrate that memory representations that result from both a single instance of top-down preparatory control and multiple similar instances of top-down preparatory control can carry over to influence subsequent singleton search performance.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does the attentional window shed light on the attentional capture debate?","authors":"Eric Ruthruff, Dominick A. Tolomeo, Sunil Jain, Kristina-Maria Reitan, Mei-Ching Lien","doi":"10.3758/s13414-025-03174-8","DOIUrl":"10.3758/s13414-025-03174-8","url":null,"abstract":"<div><p>Belopolsky et al. (2007) provided evidence that capture occurs only when objects fall within the attentional window. This attentional window hypothesis was subsequently used to explain how salient stimuli can be powerful yet often have little or no observable effect. In the present study, we attempted to replicate their findings. Participants made a go/no-go decision based on the shape of the overall search array (diffuse attention) or based on the central fixation point (focused attention). Whereas Belopolsky et al. found larger capture effects from a color singleton distractor in the diffuse condition than the focused condition (where the color singleton is assumed to fall outside the attentional window), we found no such effect (Experiment 1). When we changed the task from a feature search task in Experiment 1 to a singleton search task in Experiment 2, capture effects increased overall but were once again similar for the diffuse and focused conditions. This pattern persisted even when we closely replicated Belopolsky et al.’s original design (Experiment 3). 
Our findings call into question the attentional window account and support an alternative account of why capture sometimes occurs: singleton search mode makes color singletons capture attention because participants are looking for singletons.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Obligatory coactive processing of color and luminance challenges strategic modulation by predictiveness","authors":"Hao-Lun Fu, Yu-Chin Chiu, Kanthika Latthirun, Cheng-Ta Yang","doi":"10.3758/s13414-025-03166-8","DOIUrl":"10.3758/s13414-025-03166-8","url":null,"abstract":"<div><p>Navigating the world requires accurate categorization of objects around us, which often involves processing multiple sources of information. The predictiveness of a source plays an important role in accurate categorization. This study aims to investigate how the predictiveness of features modulates the processing strategies of two features that are generally considered more integral than separable: color and luminance. Participants categorized a set of visual stimuli, created by varying levels of color and luminance, into two categories defined by logical rules. The stimulus–category mapping was 100% in Experiment 1, but it was reduced to 95% in Experiment 2. In both experiments, the predictiveness of both features was equal. Lastly, in Experiment 3, we introduced unequal predictiveness such that color was more predictive for some participants, while luminance was more predictive for others. These manipulations were designed to test whether, as predicted by the strong version of the relative saliency hypothesis, even integral features such as color and luminance could be processed serially if one were made more predictive of the category. Across the three experiments, we employed both systems factorial technology (SFT) and computational modeling to infer processing strategies in nonparametric and parametric manners, respectively. Although some variability existed at the individual subject level, both nonparametric and parametric modeling revealed robust evidence for coactive processing for the aggregated group data, regardless of the varied stimulus–category mapping and feature predictiveness. 
These findings suggest that the processing of color and luminance within an object involves obligatory coactive processing, thereby challenging the strategic adjustment relative saliency hypothesis.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the role of attentional effort in the efficacy of goal-setting in reducing attention lapses","authors":"Deanna L. Strayer, Nash Unsworth","doi":"10.3758/s13414-025-03200-9","DOIUrl":"10.3758/s13414-025-03200-9","url":null,"abstract":"<div><p>Attention lapses occur when focus shifts away from the task at hand towards internal or external distractions and can lead to failures in completing intended actions. Goal-setting theory proposes that setting specific, difficult goals leads to better task performance over vague goals. The present study examined whether goal setting increased attentional effort and reduced attention lapses during a four-choice reaction time task. The control condition received the vague goal: “respond as quickly as possible while keeping your accuracy above 95%.” The goal condition received specific goals that became progressively harder over time (450 ms, 400 ms, and 350 ms) with the same accuracy goal. Pupillary responses were recorded throughout and subjects answered randomly presented thought probes to determine whether they were experiencing task-unrelated thoughts (TUTs). The goal condition displayed larger preparatory and phasic pupil responses, suggesting more attentional effort was exerted during the task. In addition, the goal condition displayed fewer attention lapses both behaviorally and with TUTs. Further, several typical time-on-task effects were mitigated or eliminated (shown in behavioral, subjective, and physiological measures). 
The results reinforce previous findings that goal-setting techniques can reduce attention lapses and indicate attentional effort is a mechanism behind the efficacy of goal setting.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03200-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoding child speech in silence and noise: The type of background noise shapes adults’ processing","authors":"Marzie Samimifar, Federica Bulgarelli","doi":"10.3758/s13414-025-03194-4","DOIUrl":"10.3758/s13414-025-03194-4","url":null,"abstract":"<div><p>Processing speech that is non-canonical (i.e., child-produced speech) and/or presented in background noise can pose challenges for listeners. We investigated how listening to child-produced speech affects young adults’ word recognition under varying noise conditions. Participants (<i>n</i> = 121) completed a two-picture eye-tracking task in one of three conditions: no background noise, pink background noise, and real-world background noise from LENA recordings. Participants heard a child or adult (Speaker-Age) direct attention to a generic (e.g., keys) or child-specific (e.g., potty; Item-Type) item. We examined the effect of Speaker-Age and Item-Type on participants’ looking time. In no background noise, increases in target looking were high, with greater increases when adults produced generic items. Both pink noise and real-world noise increased task difficulty, but patterns of results varied as a function of speaker gender. For female speech, background noise resulted in an effect of Speaker-Age, with participants increasing their looking time more for adult relative to child speech. The type of background noise did not influence this pattern. For male speech, there was an effect of Speaker-Age in the opposite direction, with participants increasing their looking time more for child relative to adult speech. For male speech, real-world background noise resulted in higher increases in target looking for child-specific items. 
Together, results suggest that child-produced speech may be more difficult to process than female-adult produced speech in noise, and that listeners can use background noise to predict who will speak and what they might speak about under more challenging conditions, such as processing male speech.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03194-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object correspondence in audition echoes vision: Not only spatiotemporal but also feature information influences auditory apparent motion","authors":"Meike C. Kriegeskorte, Bettina Rolke, Elisabeth Hein","doi":"10.3758/s13414-025-03175-7","DOIUrl":"10.3758/s13414-025-03175-7","url":null,"abstract":"<div><p>A crucial ability of our cognition is the perception of objects and their motions. We can perceive objects as moving by connecting them across space and time. This is possible even when the objects are not present continuously, as in the case of apparent motion displays like the Ternus display, consisting of two sets of stimuli, shifted to the left or right, separated by a variable inter-stimulus interval (ISI). This is an ambiguous display, which can be perceived as both stimuli moving uniformly to the right (group motion) or one stimulus moving across the stationary center stimulus (element motion), depending on which stimuli are connected over time. Which percept is seen can be influenced by the ISI and the stimulus features. Previous experiments have shown that the Ternus effect also exists in the auditory modality and that the auditory Ternus is also dependent on the ISI. This is a first indication that correspondence might work similarly in the visual and auditory modality. To test this idea further, we investigated whether the auditory Ternus effect is dependent on the stimulus features by creating a frequency-based bias using a high and a low sinewave tone as Ternus stimuli. This bias was compatible either with the element-motion or with the group-motion percept. 
Our results showed an influence of this feature bias in addition to an ISI effect, suggesting that the visual and the auditory modalities might both use the same mechanism to connect objects across space and time.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03175-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Holistic processing is robust in the face of task-context-induced spatial attention biases","authors":"Kim M. Curby, Sarah Lau, Chloe Pack","doi":"10.3758/s13414-025-03173-9","DOIUrl":"10.3758/s13414-025-03173-9","url":null,"abstract":"<div><p>One account of the characteristic holistic processing of faces and objects of expertise posits that it arises from a learned attention to the whole, rendering it difficult to attend only to parts of stimuli. We tested whether task-context-induced attentional biases for the top or bottom part of a stimulus alter holistic processing of faces. We induced attentional biases by manipulating the probability (75% or 25%) that the top or bottom part would be task-relevant in a modified composite part-judgement task. Manipulating the proportion of trials in which the top/bottom region was task-relevant (i.e., whether the top/bottom was cued) induced the expected attention bias, with increased sensitivity for the part more likely to be cued. Despite this, there was limited evidence of an impact on holistic face processing, with the probabilistic cueing manipulation failing to impact the congruency effect. In a second experiment, we investigated whether this finding extends to stimulus-driven holistic processing of line patterns rich in Gestalt cues. Here, the only evidence of an impact on holistic processing was the attenuation of a greater congruency effect for bottom, over top, judgements in the bottom-bias condition. However, this was primarily the result of a reduction in a general bias to process the top region, present for face and non-face stimuli, rather than a direct impact on holistic processing. Thus, holistic processing for both stimulus types was relatively robust to the influence of task context-based attentional biases. 
However, there was some evidence of greater flexibility in stimulus-driven, compared to more experience-driven, processing more generally.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face adaptation: Investigating non-configural contrast alterations","authors":"Nils Kloeckner, Ronja Mueller, Marie Buerling, Claus-Christian Carbon, Tilo Strobach","doi":"10.3758/s13414-025-03157-9","DOIUrl":"10.3758/s13414-025-03157-9","url":null,"abstract":"<div><p>The process of adapting facial representations plays a critical role in face perception and memory, representing an interplay of bottom-up and top-down mechanisms. This process allows individuals to recognize faces despite dynamic changes, for example, aging. However, a full understanding of the adaptation characteristics of non-configural facial information is still lacking in the face-processing literature. The present study investigates face aftereffects in response to facial contrast information, extending the research beyond recent studies on adaptation regarding brightness and color saturation information to a new non-configural facial information type. The research involved four experiments using celebrity face images manipulated for facial contrast, with intervals ranging from 300 ms (Experiment 1) to 5 min (Experiment 2) between adaptation and test phases. Experiment 3 used inverted adaptation faces to investigate whether adaptation effects transfer to upright test faces. The results demonstrate adaptation effects for facial contrast that are robust over time and do not transfer from inverted to upright faces. In addition, these effect sizes were compared to those of brightness and saturation information (Experiment 4), revealing no significant differences in magnitude. In general, the present findings suggest that non-configural facial contrast information is an integral part of face representations, representing an interplay of bottom-up and top-down mechanisms in face processing. 
All data are available on the Open Science Framework.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03157-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145662883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}