Latest Articles in Journal of Vision

The effects of simulated central and peripheral vision loss on naturalistic search.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.6
Kirsten Veerkamp, Daniel Müller, Gwyneth A Pechler, David L Mann, Christian N L Olivers
Abstract: Worldwide, millions of people experience central or peripheral vision loss. The consequences for daily visual functioning are not completely known, in particular because previous studies lacked real-life representativeness. Our aim was to examine the effects of simulated central or peripheral impairment on a range of measures underlying performance in a naturalistic visual search task in a three-dimensional (3D) environment. The task was performed in a 3D virtual reality (VR) supermarket environment while seated in a swivel chair. We used gaze-contingent masks to simulate vision loss. Participants were allocated to one of three conditions in a between-subjects design: full vision, central vision loss (a 6° mask), or peripheral vision loss (a 6° aperture). Each participant performed four search sequences, each consisting of four target products from a memorized shopping list, under varying contrast levels. Besides search time and accuracy, we tracked navigational, oculomotor, head, and torso movements to assess which cognitive and motor components contributed to performance differences. Results showed increased task completion times with simulated central and peripheral vision loss, but more so with peripheral loss. With central vision loss, navigation was less efficient and it took longer to verify targets. Furthermore, participants made more and shorter fixations. With peripheral vision loss, navigation was even less efficient, and it took longer to find and verify a target. Additionally, saccadic amplitudes were reduced. Low contrast particularly affected search with peripheral vision loss. Memory failure, indicating cognitive load, did not differ between conditions. Thus, we demonstrate that simulations of central and peripheral vision loss lead to differential search profiles in a naturalistic 3D environment.

Journal of Vision, 25(8): 6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12236627/pdf/
Citations: 0
Function over form: The temporal evolution of affordance-based scene categorization.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.10
Michelle R Greene, Bruce C Hansen
Abstract: Humans can rapidly understand and categorize scenes, yet the specific features and mechanisms that enable categorization remain debated. Here, we investigated whether affordances (the possible actions a scene supports) facilitate scene categorization even when other similarly informative features are present. In Experiment 1, we generated triplets of images that were equally dissimilar on one feature dimension (affordances, materials, surfaces) but similar on the remaining two. Using an odd-one-out task, observers consistently chose the image that differed in its affordances as the outlier, despite equally large differences in the other dimensions. In Experiment 2, we asked whether shared affordances also interfere with rapid categorization. When distractors shared affordances rather than surface features with a target category, observers committed significantly more false alarms, indicating that functional similarity creates stronger competition during scene categorization. Finally, in Experiment 3, we recorded ERPs to examine the time course of category representations. We used multivariate decoding to assess the quality of scene category representations. We found that both affordance- and surface-similar distractors yielded above-chance decoding starting around 60-70 ms after stimulus onset. However, the neural discriminability of target categories was reduced in the affordance-similar condition, starting around 150 ms. These findings suggest that affordances carry a privileged status in scene perception, shaping both behavioral category performance and neural processing.

Journal of Vision, 25(8): 10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12248959/pdf/
Citations: 0
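The time-resolved multivariate decoding described in the abstract above can be sketched as follows. This is a minimal synthetic-data illustration, not the authors' pipeline: the classifier (nearest centroid), the data dimensions, and the fold scheme are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: decode a binary scene category from multichannel ERP
# data at each time point, using a nearest-centroid classifier and k-fold
# cross-validation. Chance accuracy is 0.5.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 40, 16, 50
labels = np.repeat([0, 1], n_trials // 2)

# Synthetic data: a class difference appears only from time index 20 onward,
# mimicking above-chance decoding emerging some time after stimulus onset.
data = rng.normal(0, 1, (n_trials, n_channels, n_times))
signal = rng.normal(0, 1, n_channels)
data[labels == 1, :, 20:] += signal[:, None]

def decode_timecourse(data, labels, n_folds=5):
    """Return decoding accuracy at each time point."""
    acc = np.zeros(data.shape[2])
    folds = np.arange(len(labels)) % n_folds  # interleaved, class-balanced
    for t in range(data.shape[2]):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            # Class centroids from the training trials at this time point
            c0 = data[train & (labels == 0), :, t].mean(axis=0)
            c1 = data[train & (labels == 1), :, t].mean(axis=0)
            x = data[test, :, t]
            pred = (np.linalg.norm(x - c1, axis=1)
                    < np.linalg.norm(x - c0, axis=1)).astype(int)
            correct += (pred == labels[test]).sum()
        acc[t] = correct / len(labels)
    return acc

acc = decode_timecourse(data, labels)
```

Accuracy stays near chance before the synthetic effect onset and rises sharply after it, which is the signature the study looks for in the real ERP data.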
Center-surround motion interaction between low and high spatial frequencies under binocular and dichoptic viewing.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.15
Omar Bachtoula, Ignacio Serrano-Pedraza
Abstract: Motion discrimination of a stimulus that contains fine features is impaired when static coarser features are added to it. Previous findings have shown that this cross-scale motion interaction occurs under dichoptic presentation, where both components are spatially overlapped. Here, we used a center-surround spatial configuration where both components do not spatially overlap. We measured the strength of this motion interaction by assessing the cancellation speeds (i.e., the speed needed to cancel out the motion discrimination impairment) for different combinations of spatial frequencies, temporal frequencies, contrasts, durations, and under binocular and dichoptic presentations. The experiments revealed that cancellation speed is bandpass tuned to spatial frequency, increases with temporal frequency up to 12 Hz before slightly decreasing, and intensifies with contrast before stabilizing at higher levels. We found similar patterns of results for both dichoptic and binocular presentations, although the interaction was stronger in the binocular condition. These results confirm that this interaction mechanism can integrate fine and coarse scales when presented to different eyes, even when motion signals do not spatially overlap. Finally, we explain the differences between dichoptic and binocular cancellation speeds using a motion-sensing model that includes a cross-scale interaction stage. The model simulations suggest that an interocular gain control, followed by binocular summation and then by cross-scale interaction, accounts for the differences observed between binocular and dichoptic viewing.

Journal of Vision, 25(8): 15.
Citations: 0
Computational evidence for an inverse relationship between retinal and brain complexity.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.9
Mitchell B Slapik
Abstract: Visual neuroscientists have long observed an inverse relationship between brain and retinal complexity: As brain complexity increases across species, retinas adapt to simpler visual processing. Lindsey et al. previously provided a computational explanation for this pattern, showing that shallow networks encode complex features in their first stage of processing, whereas deep networks encode simpler features. Here, these findings are extended to a suite of representational analyses and show that shallow networks generate high-dimensional representations with linear decision boundaries and specific visual features that can feed directly into behavioral responses. In contrast, deep networks generate low-dimensional representations with nonlinear decision boundaries and general visual features. These representations require further processing before they can produce the appropriate behavioral response. In summary, the findings extend a longstanding principle linking simpler retinal features to complex brains and offer a computational framework for understanding neural network behavior more generally.

Journal of Vision, 25(8): 9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240199/pdf/
Citations: 0
The visibility of Eidolon distortions in things and stuff.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.12
Swantje Mahncke, Lina Eicke-Kanani, Ole Fabritz, Thomas S A Wallis
Abstract: The visibility of alterations to the physical structure of images (distortions) depends on the image content and on viewing conditions. Here we measure human sensitivity to a class of image distortions, Eidolons, applied to image sets containing a range of content, from object images or scenes, to textures and materials. In an odd-one-out task with peripherally presented images, we replicate previous findings that distortions are harder to detect in images which contain large regions of texture or material and fewer segmentable object boundaries. Next, we reason that an image-computable model able to capture the critical aspects of encoding transformations should be able to predict the discriminability of distortion-image pairs, irrespective of image content. We therefore test a variety of image-computable models, treating them as perceptual metrics, using a simple hierarchical regression framework. Of the tested models, the texture statistics of the Portilla and Simoncelli model best predicted performance, beating simple Fourier-spectrum-based transforms and a biologically inspired LGN statistics model. There remains, however, a substantial gap between the best single image-computable metric and an oracle model that has information about the experimental parameters and image labels. This work complements existing datasets in image distortion discriminability and image quality, and extends existing frameworks for comparatively evaluating the predictive performance of perceptual metrics.

Journal of Vision, 25(8): 12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12255176/pdf/
Citations: 0
The racer's gaze: Visual strategy in high-speed sports expertise.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.16
Otto Lappi, Jami Pekkanen, Aleksandra Krajnc, Lucas Iacono, Adrian Remonda, Eduardo Veas
Abstract: Eye movements shape all visual input to the brain, making their understanding essential for studying perception and visual guidance in dynamic environments. Research on expert performance indicates that gaze coordination is a key feature of expertise in, for example, sports. Mobile eye tracking provides the opportunity to investigate gaze strategies supporting the skilled actions of an athlete and can deliver insight into the underlying perceptual-cognitive processes. We systematically observed the visual strategy of an expert racing driver performing a domain-representative task. Synchronized gaze, telemetry, and localization data from a high-grade simulator were analyzed to address four classes of research questions: oculomotor, scene analysis, timing, and point of vantage. The results (a) replicate the seminal tangent point orientation (pre-turn-in saccades), (b) describe both the oculomotor signature and timing signature of the steering with the head strategy, (c) identify a novel saccade strategy (pre-full-throttle saccades), and (d) reveal a previously unstudied spatial regularity in the serial organization of behavior: a tight localization of the points of vantage where the pre-turn-in saccades and pre-full-throttle saccades are made. The gaze strategies are not tied to specifics of the task and may be relevant for understanding expert performance in other fields with similar visuomotor and cognitive demands. The method of cross-examining an integrated dataset by multiple parametrizations itself complements traditional research designs with predefined task constraints and restrictions. We are not aware of any study that has addressed all four kinds of research questions simultaneously.

Journal of Vision, 25(8): 16.
Citations: 0
The pupil response to perceptual switches: What happens when you ignore them.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-07-01 DOI: 10.1167/jov.25.8.5
Bobicheng Zhang, Vasilii Marshev, Jan W Brascamp
Abstract: The pupil has been found to dilate after switches in bistable perception, prompting the suggestion that norepinephrine-based neuromodulation plays a causal role in those switches. However, the pupil dilates in response to task-relevant events in general, and, in existing work, perceptual switches were typically task-relevant (e.g., they had to be reported). As such, observed switch-related dilations may have reflected nonspecific task relevance rather than switch-specific processes. Here, we measured pupil responses to perceptual switches that were task-irrelevant. Observers viewed a rotating structure-from-motion sphere consisting of equilateral triangles that inverted at semi-random intervals. In separate conditions, observers either reported perceptual switches (rendering them task-relevant) or reported changes in the triangles' orientation (rendering the switches task-irrelevant). We then used observers' optokinetic nystagmus to infer perceptual switch moments, even when observers did not report them. Control analyses confirm the reliability of this method. We found that task-relevant switches were followed by pupil dilations, but task-irrelevant ones were not. These results suggest that pupil-associated neuromodulation, although closely linked to task-relevant events, may not have any specific tie with perceptual bistability. These results are consistent with results we recently reported for binocular rivalry, indicating commonality across distinct forms of perceptual bistability.

Journal of Vision, 25(8): 5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12236628/pdf/
Citations: 0
Measuring spatial and temporal properties of visual crowding using continuous psychophysics.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-06-02 DOI: 10.1167/jov.25.7.7
Dilce Tanriverdi, Frans W Cornelissen
Abstract: Visual crowding refers to the difficulty in recognizing objects in the periphery when surrounded by clutter. Traditional trial-based paradigms, while effective in measuring spatial aspects of crowding, do not capture the temporal dynamics involved. In this study, we assessed the feasibility of a continuous psychophysics paradigm that measures both the spatial extent and temporal processes of visual crowding. Eight participants continuously tracked the orientation of a rotating Landolt C while the distance between the target and a ring-shaped flanker varied systematically over time. Participants set a reference stimulus to match the orientation of the target. The paradigm included "jump-points," where the orientation of the target suddenly shifted, allowing us to measure the recovery rate of participants' tracking errors following these disruptions. Tracking accuracy was compared between flanked and isolated conditions. Additionally, participants' report errors were used to assess both the crowding extent and the temporal recovery rate from the jumps, with the crowding extent results compared with those obtained from a conventional trial-based version of the paradigm. The recovery rate was calculated by fitting an exponential decay function to participants' report errors after the jumps. The results showed that the crowding extent measured using the continuous paradigm was consistent with that obtained using trial-based methods and aligned with Bouma's rule. Moreover, flankers decreased both tracking accuracy and recovery rate following the jumps. These results demonstrate that our continuous psychophysics paradigm is useful for measuring the spatiotemporal aspects of crowding.

Journal of Vision, 25(7): 7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12173087/pdf/
Citations: 0
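The recovery-rate analysis mentioned in the abstract above (fitting an exponential decay to report errors after a jump) can be sketched as follows. This is a minimal sketch on synthetic data, not the authors' code; the zero-asymptote decay form and all parameter values are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: estimate the recovery rate after an orientation jump by
# fitting an exponential decay, err(t) = a * exp(-t / tau), to tracking errors.
# With no offset term, the fit reduces to a log-linear least-squares line.
def fit_exp_decay(t, err):
    # log(err) = log(a) - t / tau  ->  ordinary least-squares line fit
    slope, intercept = np.polyfit(t, np.log(err), 1)
    tau = -1.0 / slope           # time constant of recovery (s)
    a = np.exp(intercept)        # initial error right after the jump (deg)
    return a, tau

# Synthetic demonstration data: errors decay from ~40 deg with tau = 0.5 s,
# with multiplicative noise so the log-linear fit stays well behaved.
t = np.linspace(0.05, 2.0, 40)
rng = np.random.default_rng(1)
err = 40.0 * np.exp(-t / 0.5) * rng.lognormal(0.0, 0.05, t.size)

a_hat, tau_hat = fit_exp_decay(t, err)
recovery_rate = 1.0 / tau_hat    # larger = faster recovery after a jump
```

Under this framing, the study's flanker effect would appear as a larger fitted tau (slower recovery) in the flanked condition than in the isolated one.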
Effects of object-scene congruency with and without awareness.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-06-02 DOI: 10.1167/jov.25.7.3
Weina Zhu, Jan Drewes
Abstract: Scene context has been shown to influence object recognition; it is not clear what level of visual processing is required for this effect to manifest. Specifically, it is unclear if such object/context interactions may exist in the absence of conscious awareness. By conducting experiments with and without the use of continuous flash suppression (CFS), we examined how context (background) congruency affects target recognition and response time. We used animal and vehicle images in natural or man-made scenes, which formed congruent/non-congruent image groups (100 images each). By comparing among three experimental designs (b-CFS, plain 2AFC, and 2AFC-CFS), we found the response time in the congruent scenes was significantly faster than in the incongruent scenes in plain 2AFC (without suppression). This congruency effect persisted only in the vehicle group when under b-CFS suppression. When combining the two paradigms (2AFC-CFS), the results replicated the congruency effect from the plain 2AFC condition. This indicates that the congruency effect does not emerge at the lowest levels of perception, but requires additional processing, necessitating a degree of conscious access.

Journal of Vision, 25(7): 3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12161396/pdf/
Citations: 0
Foveal crowding modifies a target's properties under a brief presentation time.
IF 2.0 · CAS Zone 4 · Psychology
Journal of Vision Pub Date: 2025-06-02 DOI: 10.1167/jov.25.7.5
Ziv Siman-Tov, Maria Lev, Uri Polat
Abstract: The perception of chromatic and achromatic visual information is combined and processed in the parvocellular stream; however, they are separate processes at the early stage of the visual cortex. In our previous study, we noted that there is difficulty discriminating the color of a letter target presented at the fovea under a crowded presentation for a short time. Visual crowding occurs when an easily identified isolated stimulus becomes very difficult to identify when it is surrounded by stimuli with similar properties. One opinion is that crowding reduces the ability to identify the target but not its features (e.g., color and texture); however, some studies indicated that the ability to recognize features is also impaired under peripheral crowding conditions. Here, we investigated whether the processing of chromatic information can be impaired at the fovea using a classic crowding experiment when tested at brief presentation times (20, 40, and 120 ms). The participants reported both the target's identity and chromaticity (dual task). We found that the target's identification and color discrimination are impaired when presented for 20-40 ms but that they recover for longer presentation times. This effect is increased when temporal backward masking is added. This finding suggests that crowding resembles masking under brief presentation times and occurs at a later processing stage, after an initial masking stage.

Journal of Vision, 25(7): 5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12166505/pdf/
Citations: 0