Attention Perception & Psychophysics: Latest Articles

No evidence for a privileged role of global ensemble statistics in rapid scene perception: A registered replication attempt
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-12-10 DOI: 10.3758/s13414-024-02994-4
Jiongtian Guo, Jay Pratt, Dirk B. Walther
{"title":"No evidence for a privileged role of global ensemble statistics in rapid scene perception: A registered replication attempt","authors":"Jiongtian Guo,&nbsp;Jay Pratt,&nbsp;Dirk B. Walther","doi":"10.3758/s13414-024-02994-4","DOIUrl":"10.3758/s13414-024-02994-4","url":null,"abstract":"<div><p>The nature of visual processes underlying scene perception remains a hotly debated topic. According to one view, scene and object perception rely on similar neural mechanisms, and their processing pathways are tightly interlinked. According to another, scene gist might follow a separate pathway, relying primarily on global image properties. Recently, this latter idea has been supported with a set of experiments using content priming as a probe into scene and object perception (Brady et al. <i>Journal of Experimental Psychology: Human Perception and Performance</i>, <i>43</i>, 1160–1176, 2017). The experiments have shown that preserving only structureless global ensemble texture information in the images of scenes could support rapid scene perception; however, preserving the same information in the images of objects failed to support object perception. We were intrigued by these results, since they are at odds with findings showing that scene content is primarily carried by the explicit encoding of scene structure as represented, for instance, by contours and their properties. In an attempt to reconcile these results, we attempted to replicate the experiments. In our replication experiment, we failed to find any evidence for a privileged use of texture information for scene as opposed to object primes. We conclude that there is no sufficient evidence for any fundamental differences in the processing pathways for object and scene perception: both rely on structural features that describe spatial relationships between constituent parts as well as texture information. To address this issue in the most rigorous manner possible, we here present the results of both a pilot experiment and a pre-registered replication attempt.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"685 - 697"},"PeriodicalIF":1.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Blindfold Test: Helping to decide whether an effect reflects visual processing or higher-level judgment
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-12-04 DOI: 10.3758/s13414-024-02939-x
Benjamin F. van Buren, Brian J. Scholl
{"title":"The Blindfold Test: Helping to decide whether an effect reflects visual processing or higher-level judgment","authors":"Benjamin F. van Buren,&nbsp;Brian J. Scholl","doi":"10.3758/s13414-024-02939-x","DOIUrl":"10.3758/s13414-024-02939-x","url":null,"abstract":"<div><p>Experimenters often ask subjects to rate displays in terms of high-level visual properties, such as animacy. When do such studies measure subjects’ visual impressions, and when do they merely reflect their judgments that certain features <i>should</i> indicate animacy? Here we introduce the ‘Blindfold Test’ for helping to evaluate the evidence for whether an effect reflects perception or judgment. If the same effect can be obtained not only with visual displays but also by simply <i>describing</i> those displays, then subjects’ responses may reflect higher-level reasoning rather than visual processing—and so other evidence is needed in order to support a ‘perceptual’ interpretation. We applied the Blindfold Test to three past studies in which observers made subjective reports about what they were seeing. In the first two examples, subjects rated stimuli in terms of high-level properties: animacy and physical forces. In both cases, the key findings replicated even when the visual stimuli were replaced with (mere) descriptions, and we conclude that these studies cannot by themselves license conclusions about perception. In contrast, a third example (involving motion-induced blindness) passed the test: subjects produced very different responses when given descriptions of the displays, compared to the visual stimuli themselves—providing compelling evidence that the original responses did not merely reflect such higher-level reasoning. The Blindfold Test may thus help to constrain interpretations of the mental processes underlying certain experimental results—especially for studies of properties that can be apprehended by both seeing and thinking.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"445 - 457"},"PeriodicalIF":1.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ensemble representation of animacy could be based on mid-level visual features
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-12-04 DOI: 10.3758/s13414-024-02976-6
Natalia A. Tiurina, Yuri A. Markov
{"title":"Ensemble representation of animacy could be based on mid-level visual features","authors":"Natalia A. Tiurina,&nbsp;Yuri A. Markov","doi":"10.3758/s13414-024-02976-6","DOIUrl":"10.3758/s13414-024-02976-6","url":null,"abstract":"<div><p>Studies suggest that mid-level features could underlie object animacy perception. In the current research, we tested whether ensemble animacy perception is based on high- or mid-level features. We used five types of images of animals and inanimate objects: color, grayscale, silhouettes, texforms – unrecognizable images that preserve mid-level texture and shape information – and scrambled images. In the series of Experiments 1, we asked participants to evaluate the animacy of single images and sets of eight images using a 10-point scale. In the series of Experiments 2, participants were shown two sets of eight images and had to choose a more animate one in the two-alternative forced-choice (2AFC) task. We found that in both paradigms, observers could report the mean animacy of the set of texform images without direct access to information about high-level features. Thus, ensemble animacy could be extracted only based on mid-level features such as shape and texture without access to more high-level information.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"415 - 430"},"PeriodicalIF":1.7,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Temporal mechanisms underlying visual processing bias in peri-hand space
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-12-03 DOI: 10.3758/s13414-024-02980-w
Ankit Maurya, Anuj Shukla, Tony Thomas
{"title":"Temporal mechanisms underlying visual processing bias in peri-hand space","authors":"Ankit Maurya,&nbsp;Anuj Shukla,&nbsp;Tony Thomas","doi":"10.3758/s13414-024-02980-w","DOIUrl":"10.3758/s13414-024-02980-w","url":null,"abstract":"<div><p>The immediate space surrounding the hands has often been termed the peri-hand space (PHS), and is characterized by a smaller reaction time (RT), better detection, and enhanced accuracy for stimuli presented in this space, relative to those stimuli presented beyond this space. Such behavioral changes have been explained in terms of a biased allocation of cognitive resources such as perception, attention, and memory, for the efficient processing of information presented in the PHS. However, in two experiments, the current study shows that these cognitive biases seem to have an underlying temporal basis. The first experiment requires participants to perform a temporal bisection task, whereas the second experiment requires them to perform a verbal estimation task when stimuli are presented either near the hands or relatively far. Results from both experiments give evidence for slowing down of temporal mechanisms in the PHS – reflected in the form of temporal dilation for stimuli presented in the PHS relative to those presented further away. The slowing down of time in the PHS seems crucial in giving sufficient temporal allowance for the allocation of cognitive resources to prioritize the processing of information in the PHS. The findings are in line with the early anticipatory mechanisms associated with the PHS and seem to be driven by the switch/gate mechanism, and not the pacemaker component of the attentional gate model of time perception. Thus, the current study tries to integrate the theories of time perception with the peripersonal space literature.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 8","pages":"2659 - 2671"},"PeriodicalIF":1.7,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Target selection during “snapshot” foraging
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-27 DOI: 10.3758/s13414-024-02988-2
Sofia Tkhan Tin Le, Árni Kristjánsson, W. Joseph MacInnes
{"title":"Target selection during “snapshot” foraging","authors":"Sofia Tkhan Tin Le,&nbsp;Árni Kristjánsson,&nbsp;W. Joseph MacInnes","doi":"10.3758/s13414-024-02988-2","DOIUrl":"10.3758/s13414-024-02988-2","url":null,"abstract":"<div><p>While previous foraging studies have identified key variables that determine attentional selection, they are affected by the global statistics of the tasks. In most studies, targets are selected one at a time without replacement while distractor numbers remain constant, steadily reducing the ratios of targets to distractors with every selection. We designed a foraging task with a sequence of local “snapshots” of foraging displays, with each snapshot requiring a target selection. This enabled tighter control of local target and distractor type ratios while maintaining the flavor of a sequential, multiple-target foraging task. Observers saw only six items for each target selection during a “snapshot” containing varying numbers of two target types and two distractor types. After each selection, a new six-item array (the following snapshot) immediately appeared, centered on the locus of the last selected target. We contrasted feature-based and conjunction-based foraging and analyzed the data by the proportion of different target types in each trial. We found that target type proportion affected selection, with longer response times during conjunction foraging when the number of the alternate target types was greater than the repeated target types. In addition, the choice of target in each snapshot was influenced by the relative positions of selected targets and distractors during preceding snapshots. Importantly, this shows to what degree previous findings on foraging can be attributed to changing global statistics of the foraging array. We propose that “snapshot foraging” can increase experimental control in understanding how people choose targets during continuous attentional orienting.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 8","pages":"2778 - 2793"},"PeriodicalIF":1.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142741368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-20 DOI: 10.3758/s13414-024-02984-6
Sara Milligan, Milca Jaime Brunet, Neslihan Caliskan, Elizabeth R. Schotter
{"title":"Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations","authors":"Sara Milligan,&nbsp;Milca Jaime Brunet,&nbsp;Neslihan Caliskan,&nbsp;Elizabeth R. Schotter","doi":"10.3758/s13414-024-02984-6","DOIUrl":"10.3758/s13414-024-02984-6","url":null,"abstract":"<div><p>Readers are able to begin processing upcoming words before directly fixating them, and in some cases skip words altogether (i.e., never fixated). However, the exact mechanisms and recognition thresholds underlying skipping decisions are not entirely clear. In the current study, we test whether skipping decisions reflect instances of more extensive lexical processing by recording neural language processing (via electroencephalography; EEG) and eye movements simultaneously, and we split trials based on target word-skipping behavior. To test lexical processing of the words, we manipulated the orthographic and phonological relationship between upcoming preview words and a semantically correct (and in some cases, expected) target word using the gaze-contingent display change paradigm. We also manipulated the constraint of the sentences to investigate the extent to which the identification of sublexical features of words depends on a reader’s expectations. We extracted fixation-related brain potentials (FRPs) during the fixation on the preceding word (i.e., in response to parafoveal viewing of the manipulated previews). We found that word skipping is associated with larger neural responses (i.e., N400 amplitudes) to semantically incongruous words that did not share a phonological representation with the correct word, and this effect was only observed in high-constraint sentences. These findings suggest that word skipping <i>can</i> be reflective of more extensive linguistic processing, but in the absence of expectations, word skipping may occur based on less fine-grained linguistic processing and be more reflective of identification of plausible or expected sublexical features rather than higher-level lexical processing (e.g., semantic access).</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 1","pages":"76 - 93"},"PeriodicalIF":1.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Disentangling decision errors from action execution in mouse-tracking studies: The case of effect-based action control
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-20 DOI: 10.3758/s13414-024-02974-8
Solveig Tonn, Moritz Schaaf, Wilfried Kunde, Roland Pfister
{"title":"Disentangling decision errors from action execution in mouse-tracking studies: The case of effect-based action control.","authors":"Solveig Tonn, Moritz Schaaf, Wilfried Kunde, Roland Pfister","doi":"10.3758/s13414-024-02974-8","DOIUrl":"https://doi.org/10.3758/s13414-024-02974-8","url":null,"abstract":"<p><p>Mouse-tracking is regarded as a powerful technique to investigate latent cognitive and emotional states. However, drawing inferences from this manifold data source carries the risk of several pitfalls, especially when using aggregated data rather than single-trial trajectories. Researchers might reach wrong conclusions because averages lump together two distinct contributions that speak towards fundamentally different mechanisms underlying between-condition differences: influences from online-processing during action execution and influences from incomplete decision processes. Here, we propose a simple method to assess these factors, thus allowing us to probe whether process-pure interpretations are appropriate. By applying this method to data from 12 published experiments on ideomotor action control, we show that the interpretation of previous results changes when dissociating online processing from decision and initiation errors. Researchers using mouse-tracking to investigate cognition and emotion are therefore well advised to conduct detailed trial-by-trial analyses, particularly when they test for direct leakage of ongoing processing into movement trajectories.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction to: On the relationship between spatial attention and semantics in the context of a Stroop paradigm
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-19 DOI: 10.3758/s13414-024-02987-3
Derek Besner, Torin Young
{"title":"Correction to: On the relationship between spatial attention and semantics in the context of a Stroop paradigm.","authors":"Derek Besner, Torin Young","doi":"10.3758/s13414-024-02987-3","DOIUrl":"10.3758/s13414-024-02987-3","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Can the left hand benefit from being right? The influence of body side on perceived grasping ability
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-18 DOI: 10.3758/s13414-024-02983-7
Rachael L. Taylor, Neil McLatchie, Sally A. Linkenauger
{"title":"Can the left hand benefit from being right? The influence of body side on perceived grasping ability","authors":"Rachael L. Taylor,&nbsp;Neil McLatchie,&nbsp;Sally A. Linkenauger","doi":"10.3758/s13414-024-02983-7","DOIUrl":"10.3758/s13414-024-02983-7","url":null,"abstract":"<div><p>Right-handed individuals (RHIs) demonstrate perceptual biases towards their right hand, estimating it to be larger and longer than their left. In addition, RHIs estimate that they can grasp larger objects with their right hand than their left. This study investigated whether visual information specifying handedness enhances biases in RHIs’ perceptions of their action capabilities. Twenty-two participants were placed in an immersive virtual environment in which self-animated, virtual hands were either presented congruently to their physical hand or mirrored. Following a calibration task, participants estimated their maximum grasp size by adjusting the size of a virtual block until it reached the largest size they thought they could grasp. The results showed that, consistent with research outside of virtual reality, RHIs gave larger estimates of maximum grasp when using their right physical hand than their left. However, this difference remained regardless of how the hand was virtually presented. This finding suggests that proprioceptive feedback may be more important than visual feedback when estimating maximum grasp. In addition, visual feedback on handedness does not appear to enhance biases in perceptions of maximum grasp with the right hand. Considerations for further research into the embodiment of mirrored virtual limbs are discussed.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 8","pages":"2834 - 2843"},"PeriodicalIF":1.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02983-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions
IF 1.7, CAS Q4, Psychology
Attention Perception & Psychophysics Pub Date: 2024-11-18 DOI: 10.3758/s13414-024-02978-4
Roy S. Hessels, Peitong Li, Sofia Balali, Martin K. Teunisse, Ronald Poppe, Diederick C. Niehorster, Marcus Nyström, Jeroen S. Benjamins, Atsushi Senju, Albert A. Salah, Ignace T. C. Hooge
{"title":"Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions","authors":"Roy S. Hessels,&nbsp;Peitong Li,&nbsp;Sofia Balali,&nbsp;Martin K. Teunisse,&nbsp;Ronald Poppe,&nbsp;Diederick C. Niehorster,&nbsp;Marcus Nyström,&nbsp;Jeroen S. Benjamins,&nbsp;Atsushi Senju,&nbsp;Albert A. Salah,&nbsp;Ignace T. C. Hooge","doi":"10.3758/s13414-024-02978-4","DOIUrl":"10.3758/s13414-024-02978-4","url":null,"abstract":"<div><p>In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner’s actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person’s gaze and another person’s manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person’s actions. When trying to infer gaze location from one’s own manual actions, gestures, or speech or that of the other person, only one’s own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human–robot interaction.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"86 8","pages":"2761 - 2777"},"PeriodicalIF":1.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02978-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0