What does visual snow look like? Quantification by matching a simulation.
Samantha A Montoya, Carter B Mulder, Karly D Allison, Michael S Lee, Stephen A Engel, Michael-Paul Schallmo
Journal of Vision, 24(6), 3 (June 3, 2024). doi:10.1167/jov.24.6.3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11160957/pdf/

Abstract: The primary symptom of visual snow syndrome (VSS) is the unremitting perception of small, flickering dots covering the visual field. VSS is a serious but poorly understood condition that can interfere with daily tasks. Several studies have provided qualitative data about the appearance of visual snow, but methods to quantify the symptom are lacking. Here, we developed a task in which participants with VSS adjusted parameters of simulated visual snow on a computer monitor until the simulation matched their internal visual snow. On each trial, participants (n = 31 with VSS) modified the size, density, update speed, and contrast of the simulation. Participants' settings were highly reliable across trials (intraclass correlation coefficients > 0.89), and they reported that the task was effective at simulating their visual snow. On average, visual snow was very small (less than 2 arcmin in diameter), updated quickly (mean temporal frequency = 18.2 Hz), had low density (mean snow elements vs. background = 2.87%), and had low contrast (average root mean square contrast = 2.56%). Our task provided a quantitative assessment of visual snow percepts, which may help individuals with VSS communicate their experience to others, facilitate assessment of treatment efficacy, and further our understanding of the trajectory of symptoms, as well as the neural origins of VSS.

Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component.
Daniel Walper, Alexandra Bendixen, Sabine Grimm, Anna Schubö, Wolfgang Einhäuser
Journal of Vision, 24(6), 7 (June 3, 2024). doi:10.1167/jov.24.6.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11166226/pdf/

Abstract: Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this effect results from higher-order statistics rather than from semantics or layout.

{"title":"Corrections to: Development of radial frequency pattern perception in macaque monkeys.","authors":"","doi":"10.1167/jov.24.6.18","DOIUrl":"10.1167/jov.24.6.18","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 6","pages":"18"},"PeriodicalIF":2.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11216250/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studies on temperature impact (sudden and gradual) of the white-leg shrimp Litopenaeus vannamei.","authors":"Vinu Dayalan, Govindaraju Kasivelu, Vasantharaja Raguraman, Amreen Nisa Sharma","doi":"10.1007/s11356-022-20963-y","DOIUrl":"10.1007/s11356-022-20963-y","url":null,"abstract":"<p><p>In the present study, the effect of temperature shock (sudden and gradual) by increasing water temperature from 28 °C to 40 °C on survival, behavioral responses and immunological changes in Litopenaeus vannamei (L. vannamei) was studied. In sudden temperature shock, experimental groups were maintained at different temperature ranges such as 28 °C- 31 °C; 28 °C-34 °C; 28 °C-37 °C and 28 °C-40 °C along with 28 °C as control. For gradual temperature shock experiments, the initial water temperature was maintained at 28 °C for 24 h in control and then increased to 1 °C for every 24 h until reaching 40 °C. On reaching the final temperature of 40 °C, it was kept stable for 120 h. Results indicated that the increasing water temperature (sudden shock) affected survival, behavioral responses and immunological parameter. No shrimp survived at 40 °C treatment (sudden), whereas in the gradual temperature shock experiment 20% of animals survived at 40 °C. The increasing water temperature had no effects on behavioral responses up to 37 °C (gradual), but at 40 °C the observation of muscle cramps, low swimming rate, no feeding, muscle and hepatopancreas color turned whitish. Overall, the results suggest that L. vannamei can tolerate water temperature up to 34 °C under sudden shock and 37 °C under gradual shock conditions. This study reveals that shrimp L. vannamei can self-regulate to a certain extent of temperature variation in the environment.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"16 1","pages":"38743-38750"},"PeriodicalIF":5.8,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87021591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous psychophysics shows millisecond-scale visual processing delays are faithfully preserved in movement dynamics.","authors":"Johannes Burge, Lawrence K Cormack","doi":"10.1167/jov.24.5.4","DOIUrl":"10.1167/jov.24.5.4","url":null,"abstract":"<p><p>Image differences between the eyes can cause interocular discrepancies in the speed of visual processing. Millisecond-scale differences in visual processing speed can cause dramatic misperceptions of the depth and three-dimensional direction of moving objects. Here, we develop a monocular and binocular continuous target-tracking psychophysics paradigm that can quantify such tiny differences in visual processing speed. Human observers continuously tracked a target undergoing Brownian motion with a range of luminance levels in each eye. Suitable analyses recover the time course of the visuomotor response in each condition, the dependence of visual processing speed on luminance level, and the temporal evolution of processing differences between the eyes. Importantly, using a direct within-observer comparison, we show that continuous target-tracking and traditional forced-choice psychophysical methods provide estimates of interocular delays that agree on average to within a fraction of a millisecond. Thus, visual processing delays are preserved in the movement dynamics of the hand. Finally, we show analytically, and partially confirm experimentally, that differences between the temporal impulse response functions in the two eyes predict how lateral target motion causes misperceptions of motion in depth and associated tracking responses. Because continuous target tracking can accurately recover millisecond-scale differences in visual processing speed and has multiple advantages over traditional psychophysics, it should facilitate the study of temporal processing in the future.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 5","pages":"4"},"PeriodicalIF":1.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11094763/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140899989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual dimensions of wood materials.
Jiří Filip, Jiří Lukavský, Filip Děchtěrenko, Filipp Schmidt, Roland W Fleming
Journal of Vision, 24(5), 12 (May 1, 2024). doi:10.1167/jov.24.5.12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11129719/pdf/

Abstract: Materials exhibit an extraordinary range of visual appearances. Characterizing and quantifying appearance is important not only for basic research on perceptual mechanisms but also for computer graphics and a wide range of industrial applications. Although methods exist for capturing and representing the optical properties of materials and how they vary across surfaces (Haindl & Filip, 2013), the representations are typically very high-dimensional, and how these representations relate to subjective perceptual impressions of material appearance remains poorly understood. Here, we used a data-driven approach to characterize the perceived appearance of 30 samples of wood veneer using a "visual fingerprint" that describes each sample as a multidimensional feature vector, with each dimension capturing a different aspect of the appearance. Fifty-six crowd-sourced participants viewed triplets of movies depicting different wood samples as the sample rotated. Their task was to report which of the two match samples was subjectively most similar to the test sample. In another online experiment, 45 participants rated 10 wood-related appearance characteristics for each of the samples. The results reveal a consistent embedding of the samples across both experiments and a set of nine perceptual dimensions capturing aspects including the roughness, directionality, and spatial scale of the surface patterns. We also showed that a weighted linear combination of 11 image statistics, inspired by the rating characteristics, predicts the perceptual dimensions well.

Enabling identification of component processes in perceptual learning with nonparametric hierarchical Bayesian modeling.
Yukai Zhao, Jiajuan Liu, Barbara Anne Dosher, Zhong-Lin Lu
Journal of Vision, 24(5), 8 (May 1, 2024). doi:10.1167/jov.24.5.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11131338/pdf/

Abstract: Perceptual learning is a multifaceted process, encompassing general learning, between-session forgetting or consolidation, and within-session fast relearning and deterioration. The learning curve constructed from threshold estimates in blocks or sessions, based on tens or hundreds of trials, may obscure component processes; high temporal resolution is necessary. We developed two nonparametric inference procedures: a Bayesian inference procedure (BIP) to estimate the posterior distribution of contrast threshold in each learning block for each learner independently, and a hierarchical Bayesian model (HBM) that computes the joint posterior distribution of contrast threshold across all learning blocks at the population, subject, and test levels via the covariance of contrast thresholds across blocks. We applied the procedures to the data from two studies that investigated the interaction between feedback and training accuracy in Gabor orientation identification over 1920 trials across six sessions and estimated the learning curve with block sizes of L = 10, 20, 40, 80, 160, and 320 trials. The HBM generated significantly better fits to the data, smaller standard deviations, and more precise estimates, compared to the BIP across all block sizes. In addition, the HBM generated unbiased estimates, whereas the BIP only generated unbiased estimates with large block sizes but exhibited increased bias with small block sizes. With L = 10, 20, and 40, we were able to consistently identify general learning, between-session forgetting, and rapid relearning and adaptation within sessions. The nonparametric HBM provides a general framework for fine-grained assessment of the learning curve and enables identification of component processes in perceptual learning.

{"title":"Feature-invariant processing of spatial segregation based on temporal asynchrony.","authors":"Yen-Ju Chen, Zitang Sun, Shin'ya Nishida","doi":"10.1167/jov.24.5.15","DOIUrl":"10.1167/jov.24.5.15","url":null,"abstract":"<p><p>Temporal asynchrony is a cue for the perceptual segregation of spatial regions. Past research found attribute invariance of this phenomenon such that asynchrony induces perceptual segmentation regardless of the changing attribute type, and it does so even when asynchrony occurs between different attributes. To test the generality of this finding and obtain insights into the underlying computational mechanism, we compared the segmentation performance for changes in luminance, color, motion direction, and their combinations. Our task was to detect the target quadrant in which a periodic alternation in attribute was phase-delayed compared to the remaining quadrants. When stimulus elements made a square-wave attribute change, target detection was not clearly attribute invariant, being more difficult for motion direction change than for luminance or color changes and nearly impossible for the combination of motion direction and luminance or color. We suspect that waveform mismatch might cause anomalous behavior of motion direction since a square-wave change in motion direction is a triangular-wave change in the spatial phase (i.e., a second-order change in the direction of the spatial phase change). In agreement with this idea, we found that the segregation performance was strongly affected by the waveform type (square wave, triangular wave, or their combination), and when this factor was controlled, the performance was nearly, though not perfectly, invariant against attribute type. The results were discussed with a model in which different visual attributes share a common asynchrony-based segmentation mechanism.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 5","pages":"15"},"PeriodicalIF":1.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11146091/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141181174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training-induced changes in population receptive field properties in visual cortex: Impact of eccentric vision training on population receptive field properties and the crowding effect.
Maka Malania, Yih-Shiuan Lin, Charlotte Hörmandinger, John S Werner, Mark W Greenlee, Tina Plank
Journal of Vision, 24(5), 7 (May 1, 2024). doi:10.1167/jov.24.5.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11114612/pdf/

Abstract: This study aimed to investigate the impact of eccentric-vision training on population receptive field (pRF) estimates to provide insights into brain plasticity processes driven by practice. Fifteen participants underwent functional magnetic resonance imaging (fMRI) measurements before and after behavioral training on a visual crowding task, in which the relative orientation of the opening (gap position: up/down, left/right) in a Landolt C optotype had to be discriminated in the presence of flanking ring stimuli. Drifting checkerboard bar stimuli were used for pRF size estimation in multiple regions of interest (ROIs): dorsal V1 (dV1), dorsal V2 (dV2), ventral V1 (vV1), and ventral V2 (vV2), including the visual cortex region corresponding to the trained retinal location. pRF estimates in V1 and V2 were obtained along eccentricities from 0.5° to 9°. Statistical analyses revealed a significant decrease in the crowding anisotropy index after training (p = 0.009), indicating improved crowding-task performance. Notably, pRF sizes at and near the trained location decreased significantly (p = 0.005). Dorsal and ventral V2 exhibited significant pRF size reductions, especially at eccentricities where the training stimuli were presented (p < 0.001). In contrast, no significant changes in pRF estimates were found in either vV1 (p = 0.181) or dV1 (p = 0.055) voxels. These findings suggest that practice on a crowding task can lead to a reduction of pRF sizes in trained visual cortex, particularly in V2, highlighting the plasticity and adaptability of the adult visual system induced by prolonged training.

{"title":"Multiple object tracking in the presence of a goal: Attentional anticipation and suppression.","authors":"Andrea Frielink-Loing, Arno Koning, Rob van Lier","doi":"10.1167/jov.24.5.10","DOIUrl":"10.1167/jov.24.5.10","url":null,"abstract":"<p><p>In previous studies, we found that tracking multiple objects involves anticipatory attention, especially in the linear direction, even when a target bounced against a wall. We also showed that active involvement, in which the wall was replaced by a controllable paddle, resulted in increased allocation of attention to the bounce direction. In the current experiments, we wanted to further investigate the potential influence of the valence of the heading of an object. In Experiments 1 and 2, participants were instructed to catch targets with a movable goal. In Experiment 3, participants were instructed to manipulate the permeability of a static wall in order to let targets either approach goals (i.e., green goals) or avoid goals (i.e., red goals). The results of Experiment 1 showed that probe detection ahead of a target that moved in the direction of the goal was higher as compared to probe detection in the direction of a no-goal area. Experiment 2 provided further evidence that the attentional highlighting found in the first experiment depends on the movement direction toward the goal. In Experiment 3, we found that not so much the positive (or neutral) valence (here, the green and no-goal areas) led to increased allocation of attention but rather a negative valence (here the red goals) led to a decreased allocation of attention.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 5","pages":"10"},"PeriodicalIF":1.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11129718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141088349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}