Latest Articles in Vision Research

CNN-extracted features generate synthetic fMRI responses to unseen images
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-09-01 | Epub Date: 2025-06-21 | DOI: 10.1016/j.visres.2025.108641
Parsa Delavari, Leonid Sigal, Ipek Oruc
Abstract: Inspired by biological vision, convolutional neural networks (CNNs) have tackled challenging image recognition problems once considered the sole purview of human expertise. In turn, CNNs are now widely used as a framework for studying human vision. The organizational similarity between the layers of CNNs and cortical regions along the visual pathway has been shown in studies using human fMRI data: early visual areas' activities are better predicted by the first layers of CNNs, while the last layers better predict the responses of higher-level visual areas. However, there is a lack of agreement on how well CNN features can predict fMRI responses, particularly in the presence of fMRI noise, which can produce varying brain responses to repetitions of the same image. Additionally, the utility of these predicted responses to previously unseen images as synthetic fMRI data has not yet been explored. Here we use the BOLD5000 dataset and the AlexNet architecture, initialized with model weights pre-trained on ImageNet, to show that features extracted by CNNs can generate highly accurate synthetic fMRI responses to images. We demonstrate that synthetic fMRI responses show higher correlations with repetitions of real responses than the real responses themselves, surpassing the quality of real data in the presence of noise. Moreover, we train a decoder with synthetic fMRI data to classify real fMRI data for unseen images and even unseen object categories. Our decoding experiments revealed that the synthetic data outperformed real data, particularly because larger synthetic datasets can be generated. Our findings showcase the high quality of synthetic fMRI responses generated from CNN features, which exhibit both similarity to real data and practical utility in empirical applications.
Vision Research, Volume 234, Article 108641. Citations: 0
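The pipeline the abstract describes (CNN features in, predicted voxel responses out) is typically implemented as a regularized linear encoding model. The sketch below is an illustrative assumption, not the authors' actual code: it uses random stand-in matrices in place of real AlexNet activations and BOLD5000 responses, and ridge regression with an arbitrary regularization strength.

```python
import numpy as np

# Hypothetical sketch of a linear encoding model: map CNN-layer features to
# voxel responses with ridge regression, then predict "synthetic fMRI" for
# unseen images. All data here are random stand-ins.
rng = np.random.default_rng(0)

n_train, n_test, n_features, n_voxels = 200, 20, 256, 50
X_train = rng.standard_normal((n_train, n_features))  # CNN features, seen images
X_test = rng.standard_normal((n_test, n_features))    # CNN features, unseen images

# Simulate voxel responses as a noisy linear readout of the features.
W_true = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))

def ridge_fit(X, Y, alpha=10.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W_hat = ridge_fit(X_train, Y_train)
Y_synthetic = X_test @ W_hat  # synthetic responses to unseen images

print(Y_synthetic.shape)  # one predicted response per unseen image and voxel
```

In practice such a model would be fit per subject and per region of interest, with the regularization strength chosen by cross-validation.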
Grouping strategies in induced perceptual grouping
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-09-01 | Epub Date: 2025-06-26 | DOI: 10.1016/j.visres.2025.108651
Maria Kon, Gregory Francis
Abstract: Induced grouping refers to the influence of a perceived group of elements on the grouping of another set of elements, an influence that cannot be explained by other grouping principles. Vickery (2008) first highlighted this phenomenon and, despite convincing demonstrations of the principle, his report appears to remain the only direct study of it. Here we report two successful large-sample replications of one of Vickery's experiments. We also explain Vickery's results with a cortical model of visual grouping and selection, extending a previous model so that it performs a feature-based search of an image for a target. We show that induced grouping effects result from a connection strategy that links together target pairs in a visual search task, combined with a selection strategy that tends to place a selection signal at locations close to the target-pair features. These strategies interact because the connection strategy that links target pairs also sometimes links inducing elements, thereby influencing the location of the selection signal. The model extension plays a key role in explaining this phenomenon and enables the model to simulate other tasks, such as visual search, in which the observer uses a dynamic, feature-guided selection process.
Vision Research, Volume 234, Article 108651. Citations: 0
Temporal dynamics of perceived motion flow in naturalistic movie sequences
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-09-01 | Epub Date: 2025-06-05 | DOI: 10.1016/j.visres.2025.108629
Yung-Hao Yang, Taiki Fukiage, Zitang Sun, Shin'ya Nishida
Abstract: To visualize the temporal dynamics of human visual motion perception under conditions close to everyday life, we measured the time course of the perceived motion vector while observers viewed naturalistic movie clips featuring large direction changes of target objects. The spatiotemporal position of the target's local motion was probed by a flashing dot that appeared within ±66.7 ms of the direction change, and observers reported the perceived local motion using a motion-vector matching method. The results show that the deviation of the perceived flow from the physical ground truth increased when the probe was presented near the direction change. The pattern of errors averaged across multiple trials could be described by a Gaussian temporal smoothing of the local motion vectors, with a window spanning about 120 ms. These results are consistent with previous reports of the sluggish temporal response of visual motion processing revealed by artificial stimuli and different tasks. However, a detailed examination of individual trials indicated that observers did not report a gradual transition from pre-change to post-change vectors; rather, they reported either of the two directions in a bimodal fashion, with response rates that changed gradually over time. This suggests that even though motion processing may be sluggish, human observers correctly recognize a sudden direction change as a sudden event, not as a gradual transition.
Vision Research, Volume 234, Article 108629. Citations: 0
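The Gaussian temporal smoothing that describes the trial-averaged errors can be sketched as convolving the local motion vectors with a roughly 120 ms Gaussian window. Everything specific in the sketch below is an assumption for illustration: the 60 Hz frame rate, treating the 120 ms window as the kernel's full width at half maximum, and the toy rightward-to-upward direction change.

```python
import numpy as np

# Hypothetical sketch: smooth a motion-vector time series with a ~120 ms
# Gaussian window, as in the paper's descriptive model of averaged errors.
dt = 1000 / 60.0                  # frame interval in ms (60 Hz assumed)
t = np.arange(-200, 201, dt)      # time relative to the direction change
# Toy stimulus: motion flips from rightward [1, 0] to upward [0, 1] at t = 0.
vectors = np.where(t[:, None] < 0, [1.0, 0.0], [0.0, 1.0])

# Interpret the ~120 ms window as the kernel's FWHM (an assumption).
sigma = 120.0 / (2 * np.sqrt(2 * np.log(2)))
kernel = np.exp(-0.5 * (np.arange(-150, 151, dt) / sigma) ** 2)
kernel /= kernel.sum()            # normalize to unit area

# Smooth each vector component over time; 'same' keeps the original length.
smoothed = np.column_stack(
    [np.convolve(vectors[:, k], kernel, mode="same") for k in range(2)]
)
```

Near the change the smoothed trace is a gradual blend of the two directions, which is exactly what distinguishes this averaged description from the bimodal single-trial reports the abstract emphasizes.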
When children get the gist: The development of rapid scene categorisation
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-08-01 | Epub Date: 2025-05-17 | DOI: 10.1016/j.visres.2025.108620
Elizabeth A.G. Watson, Louise Ewing, George L. Malcolm
Abstract: Research on adult recognition of scene gist is extensive; however, very little is known about its development. Behavioural studies of scene processing tend to support a protracted developmental trajectory, with a quantitative, and perhaps also qualitative, shift towards more adult-like processing across middle childhood. Here we sought to better understand the very early stages of children's scene processing by targeting gist perception. Children aged 5-10 years categorised backward-masked scenes presented at very brief durations. We drew inferences about the processing speed with which each age group extracted category-diagnostic information by varying presentation durations, and about the quality of the information extracted by varying the level at which children were prompted to make their judgements (superordinate level, indicative of coarse global information; basic level, indicative of more detailed information). Children of all ages demonstrated a remarkably sophisticated ability to extract scene gist, with 5-6-year-olds performing above chance for scenes presented for as little as 32 ms for both superordinate- and basic-level judgements. Categorisation performance also became more efficient with age. Overall, our findings indicate that young children possess an impressive ability to process a scene's gist, followed by a protracted development towards expertise across middle childhood.
Vision Research, Volume 233, Article 108620. Citations: 0
Effects of brief and prolonged blur adaptation on visual search and discrimination
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-08-01 | Epub Date: 2025-05-15 | DOI: 10.1016/j.visres.2025.108619
Idris Shareef, Nasif Zaman, Michael Webster, Alireza Tavakkoli, Fang Jiang
Abstract: Adaptation to blurred or sharpened images has a large and rapid effect on perceived image focus and, at longer durations, has also been reported to affect acuity and blur sensitivity; however, the dynamics and functional consequences of this adaptation remain poorly characterized. We tested the effects of blur adaptation on visual performance for two tasks and two adapting durations. Specifically, we measured the effects of brief (12 s) vs. prolonged (2 h) blur adaptation on visual search and discrimination performance. Our results show that adaptation improved search accuracy for novel blur levels after the prolonged but not the brief adapting duration, while neither duration improved blur discrimination. The improvement in visual search could reflect a role of adaptation in increasing the salience of novel properties in the visual environment.
Vision Research, Volume 233, Article 108619. Citations: 0
Effects of repeated stimulus presentations on Oppel-Kundt and Müller-Lyer illusions
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-08-01 | Epub Date: 2025-05-16 | DOI: 10.1016/j.visres.2025.108621
Tadas Surkys, Arūnas Bielevičius, Vilius Marma
Abstract: We present a post-hoc analysis examining whether repeated trials and multiple sessions affect the measured strength of two length illusions, the Oppel-Kundt (O-K) and Müller-Lyer (M-L) illusions, as well as a non-illusory control stimulus. Data were taken from earlier studies in which participants adjusted a variable segment to match a reference segment. Short sessions featured five trials per figure, whereas extended sessions contained 24 or 26 trials for the O-K and M-L figures. Linear mixed-effects models tested how trial number, session number, initial interval length, and figure subtype influenced illusion magnitude. O-K illusion magnitude tended to decline across the extended sessions, although individual observers displayed varying trends (some increased, others decreased). The M-L illusion did not show the pronounced adaptation reported in previous work. These findings highlight the need to manage repeated presentations: controlling viewing durations, inter-session intervals, and participant strategies can help limit adaptation. Methodologically, mixing different stimuli, randomizing key parameters, and scheduling breaks appear to yield more stable measurements of illusion magnitude.
Vision Research, Volume 233, Article 108621. Citations: 0
Conducting online visual psychophysics experiments: A replication assessment of two face processing studies
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-08-01 | Epub Date: 2025-05-12 | DOI: 10.1016/j.visres.2025.108617
Caroline Blais, Daniel Fiset, Laurianne Côté, Vicki Ledrou-Paquet, Isabelle Charbonneau
Abstract: In the vision sciences, researchers rigorously control the testing environment and the physical properties of stimuli, making it challenging to conduct visual perception experiments online. However, online research offers key advantages, including access to larger and more diverse participant samples, helping to address the problem of underpowered studies and to enhance the generalizability of results. In face recognition research, increasing diversity is essential, especially given evidence that cultural and geographical factors influence basic visual face processing. The present study tested a new online platform, Pack & Go from VPixx Technologies, which supports experiments written in MATLAB and Python. Two face recognition experiments based on a data-driven psychophysical method involving real-time stimulus manipulation and relying on functions from the Psychtoolbox were tested. In Experiment 1, the visual information used for face recognition was compared across four conditions that gradually reduced experimental control over the testing environment and stimulus properties. In Experiment 2, the association between face recognition ability and information utilization was measured online and compared to lab-based results. In both experiments, results obtained in the lab and online were highly similar, demonstrating the potential of online research for vision science.
Vision Research, Volume 233, Article 108617. Citations: 0
Understanding the subtypes of visual hypersensitivity: Four coherent factors and their measurement with the Cardiff Hypersensitivity Scale (CHYPS)
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-08-01 | Epub Date: 2025-05-19 | DOI: 10.1016/j.visres.2025.108610
Alice Price, Petroc Sumner, Georgina Powell
Abstract: Subjective visual sensitivity or discomfort has been reported in many separate literatures and spans a wide range of visual triggers (e.g., repeating patterns, bright lights, motion, flicker) across a wide range of neurological, psychiatric, mental health, and developmental conditions and areas of neurodiversity (e.g., migraine, traumatic brain injury, functional neurological disorder, PPPD, PTSD, anxiety, depression, anorexia, OCD, autism, ADHD, dyslexia, dyspraxia, synaesthesia). To unite this research across disciplines and to allow progress in mechanistic understanding, we aimed to provide a definitive answer to whether there are different subtypes (factors) of visual hypersensitivity. In Study 1, we generated questions from a large qualitative dataset (n = 765), existing literatures, questionnaires, and iteratively from participant feedback. We found four theoretically coherent factors replicated across five cohorts (n's = 349, 517, 349, 417, 797 and 1817): brightness (e.g., sunlight), repeating patterns (e.g., stripes), strobing (e.g., flashing, screen motion), and intense visual environments (e.g., supermarkets, traffic). There was also a general factor. On this basis we produced a novel 20-item questionnaire, the Cardiff Hypersensitivity Scale (CHYPS), with good reliability (α > 0.8, ω > 0.8) and convergent validity (correlations with other visual scales, r > 0.6). We discuss how these factors can be related to causal theories of hypersensitivity.
Vision Research, Volume 233, Article 108610. Citations: 0
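The reliability figure quoted for CHYPS, Cronbach's α, has a simple closed form: α = k/(k-1) · (1 - Σ var(item_i) / var(total)) for k items. A minimal sketch with made-up toy ratings (not CHYPS data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering 4 highly consistent Likert-style items.
scores = np.array([
    [4, 4, 5, 4],
    [2, 2, 2, 3],
    [5, 5, 4, 5],
    [1, 1, 2, 1],
    [3, 3, 3, 3],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(scores), 3))  # well above 0.8 for consistent items
```

McDonald's ω, the other statistic the abstract reports, instead requires fitting a factor model and is usually computed with a dedicated psychometrics package.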
Does pseudoneglect influence pupillary light or dark response?
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-07-01 | Epub Date: 2025-05-09 | DOI: 10.1016/j.visres.2025.108618
Wiktor Więcławski, Aleksandra Smus, Marek Binder
Abstract: The pupillary light response (PLR) is modulated by the allocation of spatial attention. Larger pupil constrictions for bright stimuli presented on the left side are considered indicative of pseudoneglect, a subtle attentional bias observed in neurotypical populations. This study aimed to replicate this effect using the split-screen method, a newly introduced measure of spatial attentional bias, while accounting for factors such as contraction anisocoria by recording from both pupils. Additionally, we introduced conditions with and without competing stimuli (a black patch opposite the white patch intended to elicit pupil contraction) to investigate the role of visual competition in PLR modulation, and we explored the pupillary dark response (PDR) to assess whether attentional biases affect pupil dilation. Contrary to our hypothesis, we did not observe a significant pseudoneglect effect, as pupil constriction was not consistently greater for left-sided stimuli. We found clear evidence for contraction anisocoria, whereby ipsilateral stimuli produce stronger constrictions than contralateral stimuli, highlighting the need to account for this physiological effect in future studies. For the PDR, we found neither significant attentional modulation nor evidence of dilation anisocoria, as pupil dilation amplitudes were similar across both hemifields. These findings suggest that although the split-screen method can reveal physiological asymmetries such as anisocoria, its sensitivity to attentional biases in neurotypical populations still requires further investigation.
Vision Research, Volume 232, Article 108618. Citations: 0
Could an auditory equivalent to a continuously varying visual stimulus improve beat synchronization? Evidence supporting vision as a trustworthy modality in sensorimotor timing
IF 1.5 | CAS Tier 4, Psychology
Vision Research Pub Date: 2025-07-01 | Epub Date: 2025-05-08 | DOI: 10.1016/j.visres.2025.108616
Yingyu Huang, Chaolun Wang, Xiang Wu
Abstract: Synchronization to a perceived regular beat has been established to be much less variable for discrete auditory stimuli, such as tones, than for discrete visual stimuli, such as flashes. Recent advances in beat-synchronization research, however, have demonstrated that visual beat synchronization can be substantially improved, becoming comparable to synchronization with auditory tones, when continuously moving visual stimuli are employed. It has therefore been suggested that modality is an important but not necessarily dominant factor in beat synchronization. However, doubts remain in favour of auditory dominance, as comparing continuously varying visual stimuli with discrete auditory stimuli is considered unfair. Here, starting from a periodically contracting ring whose spatial displacement varied continuously with a constant acceleration, we devised an auditory equivalent: an amplitude-modulated sound whose amplitude varied continuously with an acceleration of the same magnitude. The results showed that beat-synchronization performance with the amplitude-modulated sound was not better than with the tone or the contracting ring. The present finding supports vision as a trustworthy modality for sensorimotor timing.
Vision Research, Volume 232, Article 108616. Citations: 0