Advances in Methods and Practices in Psychological Science: Latest Publications

Your Coefficient Alpha Is Probably Wrong, but Which Coefficient Omega Is Right? A Tutorial on Using R to Obtain Better Reliability Estimates
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-11-06 · DOI: 10.1177/2515245920951747
D. Flora
{"title":"Your Coefficient Alpha Is Probably Wrong, but Which Coefficient Omega Is Right? A Tutorial on Using R to Obtain Better Reliability Estimates","authors":"D. Flora","doi":"10.1177/2515245920951747","DOIUrl":"https://doi.org/10.1177/2515245920951747","url":null,"abstract":"Measurement quality has recently been highlighted as an important concern for advancing a cumulative psychological science. An implication is that researchers should move beyond mechanistically reporting coefficient alpha toward more carefully assessing the internal structure and reliability of multi-item scales. Yet a researcher may be discouraged upon discovering that a prominent alternative to alpha, namely, coefficient omega, can be calculated in a variety of ways. In this Tutorial, I alleviate this potential confusion by describing alternative forms of omega and providing guidelines for choosing an appropriate omega estimate pertaining to the measurement of a target construct represented with a confirmatory factor analysis model. Several applied examples demonstrate how to compute different forms of omega in R.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"484 - 501"},"PeriodicalIF":13.6,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920951747","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46361358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 163
The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-11-05 · DOI: 10.1177/25152459211045930
Márton Kovács, Rink Hoekstra, B. Aczel
{"title":"The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management","authors":"Márton Kovács, Rink Hoekstra, B. Aczel","doi":"10.1177/25152459211045930","DOIUrl":"https://doi.org/10.1177/25152459211045930","url":null,"abstract":"Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half of respondents, and major or extreme consequences for about one fifth of respondents. The most frequently reported types of mistakes were ambiguous naming/defining of data, version control error, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative for psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42747473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Commentary on Hussey and Hughes (2020): Hidden Invalidity Among 15 Commonly Used Measures in Social and Personality Psychology
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-10-15 · DOI: 10.1177/2515245920957618
Eunike Wetzel, B. Roberts
{"title":"Commentary on Hussey and Hughes (2020): Hidden Invalidity Among 15 Commonly Used Measures in Social and Personality Psychology","authors":"Eunike Wetzel, B. Roberts","doi":"10.1177/2515245920957618","DOIUrl":"https://doi.org/10.1177/2515245920957618","url":null,"abstract":"Hussey and Hughes (2020) analyzed four aspects relevant to the structural validity of a psychological scale (internal consistency, test-retest reliability, factor structure, and measurement invariance) in 15 self-report questionnaires, some of which, such as the Big Five Inventory ( John & Srivastava, 1999) and the Rosenberg Self-Esteem Scale (Rosenberg, 1965), are very popular. In this Commentary, we argue that (a) their claim that measurement issues like these are ignored is incorrect, (b) the models they used to test structural validity do not match the construct space for many of the measures, and (c) their analyses and conclusions regarding measurement invariance were needlessly limited to a dichotomous decision rule. First, we believe it is important to note that we are in agreement with the sentiment behind Hussey and Hughes’s study and the previous work that appeared to inspire it (Flake, Pek, & Hehman, 2017). Measurement issues are seldom the focus of the articles published in the top journals in personality and social psychology, and the quality of the measures used by researchers is not a top priority in evaluating the value of the research. Furthermore, the use of ad hoc measures is common in some fields. Nonetheless, we disagree with the authors’ analyses, interpretations, and conclusions concerning the validity of these 15 specific measures for the three reasons we discuss here.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"505 - 508"},"PeriodicalIF":13.6,"publicationDate":"2020-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920957618","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44972857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Statistical Control Requires Causal Justification
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-10-13 · DOI: 10.1177/25152459221095823
Anna C. Wysocki, K. Lawson, M. Rhemtulla
{"title":"Statistical Control Requires Causal Justification","authors":"Anna C. Wysocki, K. Lawson, M. Rhemtulla","doi":"10.1177/25152459221095823","DOIUrl":"https://doi.org/10.1177/25152459221095823","url":null,"abstract":"It is common practice in correlational or quasiexperimental studies to use statistical control to remove confounding effects from a regression coefficient. Controlling for relevant confounders can debias the estimated causal effect of a predictor on an outcome; that is, it can bring the estimated regression coefficient closer to the value of the true causal effect. But statistical control works only under ideal circumstances. When the selected control variables are inappropriate, controlling can result in estimates that are more biased than uncontrolled estimates. Despite the ubiquity of statistical control in published regression analyses and the consequences of controlling for inappropriate third variables, the selection of control variables is rarely explicitly justified in print. We argue that to carefully select appropriate control variables, researchers must propose and defend a causal structure that includes the outcome, predictors, and plausible confounders. We underscore the importance of causality when selecting control variables by demonstrating how regression coefficients are affected by controlling for appropriate and inappropriate variables. Finally, we provide practical recommendations for applied researchers who wish to use statistical control.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"5 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45832531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
Persons as Effect Sizes
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-10-09 · DOI: 10.1177/2515245920922982
J. Grice, Eliwid Medellin, Ian Jones, Samantha Horvath, Hailey McDaniel, Chance O’lansen, Meggie Baker
{"title":"Persons as Effect Sizes","authors":"J. Grice, Eliwid Medellin, Ian Jones, Samantha Horvath, Hailey McDaniel, Chance O’lansen, Meggie Baker","doi":"10.1177/2515245920922982","DOIUrl":"https://doi.org/10.1177/2515245920922982","url":null,"abstract":"Traditional indices of effect size are designed to answer questions about average group differences, associations between variables, and relative risk. For many researchers, an additional, important question is, “How many people in my study behaved or responded in a manner consistent with theoretical expectation?” We show how the answer to this question can be computed and reported as a straightforward percentage for a wide variety of study designs. This percentage essentially treats persons as an effect size, and it can easily be understood by scientists, professionals, and laypersons alike. For instance, imagine that in addition to d or η2, a researcher reports that 80% of participants matched theoretical expectation. No statistical training is required to understand the basic meaning of this percentage. By analyzing recently published studies, we show how computing this percentage can reveal novel patterns within data that provide insights for extending and developing the theory under investigation.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"443 - 455"},"PeriodicalIF":13.6,"publicationDate":"2020-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920922982","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44876063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
Boundary Conditions for the Practical Importance of Small Effects in Long Runs: A Comment on Funder and Ozer (2019)
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-10-07 · DOI: 10.1177/2515245920957607
J. Sauer, A. Drummond
{"title":"Boundary Conditions for the Practical Importance of Small Effects in Long Runs: A Comment on Funder and Ozer (2019)","authors":"J. Sauer, A. Drummond","doi":"10.1177/2515245920957607","DOIUrl":"https://doi.org/10.1177/2515245920957607","url":null,"abstract":"Funder and Ozer (2019) argued that small effects can haveimportant implications in cumulative long-run scenarios.We certainly agree. However, some important caveatsmerit explicit consideration. We elaborate on the previously acknowledged importance of preregistration (andopen-data practices) and identify two additional considerations for interpreting small effects in long-run scenarios: restricted extrapolation and construct validity","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"502 - 504"},"PeriodicalIF":13.6,"publicationDate":"2020-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920957607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49101096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Many Labs 5: Registered Replication of LoBue and DeLoache (2008)
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-09-01 · DOI: 10.1177/2515245920953350
L. Lazarević, D. Purić, I. Žeželj, Radomir Belopavlović, Bojana Bodroža, Marija Čolić, C. Ebersole, Máire B Ford, Ana Orlić, Ivana Pedović, B. Petrović, A. Shabazian, Darko Stojilović
{"title":"Many Labs 5: Registered Replication of LoBue and DeLoache (2008)","authors":"L. Lazarević, D. Purić, I. Žeželj, Radomir Belopavlović, Bojana Bodroža, Marija Čolić, C. Ebersole, Máire B Ford, Ana Orlić, Ivana Pedović, B. Petrović, A. Shabazian, Darko Stojilović","doi":"10.1177/2515245920953350","DOIUrl":"https://doi.org/10.1177/2515245920953350","url":null,"abstract":"Across three studies, LoBue and DeLoache (2008) provided evidence suggesting that both young children and adults exhibit enhanced visual detection of evolutionarily relevant threat stimuli (as compared with nonthreatening stimuli). A replication of their Experiment 3, conducted by Cramblet Alvarez and Pipitone (2015) as part of the Reproducibility Project: Psychology (RP:P), demonstrated trends similar to those of the original study, but the effect sizes were smaller and not statistically significant. There were, however, some methodological differences (e.g., screen size) and sampling differences (the age of recruited children) between the original study and the RP:P replication study. Additionally, LoBue and DeLoache expressed concern over the choice of stimuli used in the RP:P replication. We sought to explore the possible moderating effects of these factors by conducting two new replications—one using the protocol from the RP:P and the other using a revised protocol. We collected data at four sites, three in Serbia and one in the United States (total N = 553). Overall, participants were not significantly faster at detecting threatening stimuli. Thus, results were not supportive of the hypothesis that visual detection of evolutionarily relevant threat stimuli is enhanced in young children. The effect from the RP:P protocol (d = −0.10, 95% confidence interval = [−1.02, 0.82]) was similar to the effect from the revised protocol (d = −0.09, 95% confidence interval = [−0.33, 0.15]), and the results from both the RP:P and the revised protocols were more similar to those found by Cramblet Alvarez and Pipitone than to those found by LoBue and DeLoache.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"377 - 386"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920953350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42166078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 5
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-09-01 · DOI: 10.1177/2515245920945963
Christopher R. Chartier, J. Arnal, Holly Arrow, Nicholas G. Bloxsom, D. Bonfiglio, C. Brumbaugh, Katherine S. Corker, C. Ebersole, Alexander Garinther, S. Giessner, Sean Hughes, M. Inzlicht, Hause Lin, Brett Mercier, Mitchell M. Metzger, D. Rangel, Blair Saunders, Kathleen Schmidt, Daniel Storage, Carly Tocco
{"title":"Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 5","authors":"Christopher R. Chartier, J. Arnal, Holly Arrow, Nicholas G. Bloxsom, D. Bonfiglio, C. Brumbaugh, Katherine S. Corker, C. Ebersole, Alexander Garinther, S. Giessner, Sean Hughes, M. Inzlicht, Hause Lin, Brett Mercier, Mitchell M. Metzger, D. Rangel, Blair Saunders, Kathleen Schmidt, Daniel Storage, Carly Tocco","doi":"10.1177/2515245920945963","DOIUrl":"https://doi.org/10.1177/2515245920945963","url":null,"abstract":"In Experiment 5 of Albarracín et al. (2008), participants primed with words associated with action performed better on a subsequent cognitive task than did participants primed with words associated with inaction. A direct replication attempt by Frank, Kim, and Lee (2016) as part of the Reproducibility Project: Psychology (RP:P) failed to find evidence for this effect. In this article, we discuss several potential explanations for these discrepant findings: the source of participants (Amazon’s Mechanical Turk vs. traditional undergraduate-student pool), the setting of participation (online vs. in lab), and the possible moderating role of affect. We tested Albarracín et al.’s original hypothesis in two new samples: For the first sample, we followed the protocol developed by Frank et al. and recruited participants via Amazon’s Mechanical Turk (n = 580). For the second sample, we used a revised protocol incorporating feedback from the original authors and recruited participants from eight universities (n = 884). We did not detect moderation by protocol; patterns in the revised protocol resembled those in our implementation of the RP:P protocol, but the estimate of the focal effect size was smaller than that found originally by Albarracín et al. and larger than that found in Frank et al.’s replication attempt. We discuss these findings and possible explanations.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"332 - 339"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920945963","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43659543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 7
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-09-01 · DOI: 10.1177/2515245920925750
Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins
{"title":"Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 7","authors":"Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins","doi":"10.1177/2515245920925750","DOIUrl":"https://doi.org/10.1177/2515245920925750","url":null,"abstract":"Albarracín et al. (2008, Experiment 7) tested whether priming action or inaction goals (vs. no goal) and then satisfying those goals (vs. not satisfying them) would be associated with subsequent cognitive responding. They hypothesized and found that priming action or inaction goals that were not satisfied resulted in greater or lesser responding, respectively, compared with not priming goals (N = 98). Sonnleitner and Voracek (2015) attempted to directly replicate Albarracín et al.’s (2008) study with German participants (N = 105). They did not find evidence for the 3 × 2 interaction or the expected main effect of task type. The current study attempted to directly replicate Albarracín et al. (2008), Experiment 7, with a larger sample of participants (N = 1,690) from seven colleges and universities in the United States. We also extended the study design by using a scrambled-sentence task to prime goals instead of the original task of completing word fragments, allowing us to test whether study protocol moderated any effects of interest. We did not detect moderation by protocol in the full 3 × 2 × 2 design (pseudo-r2 = 0.05%). Results for both protocols were largely consistent with Sonnleitner and Voracek’s findings (pseudo-r2s = 0.14% and 0.50%). We consider these results in light of recent findings concerning priming methods and discuss the robustness of action-/inaction-goal priming to the implementation of different protocols in this particular context.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"340 - 352"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920925750","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44432448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Many Labs 5: Registered Replication of Crosby, Monin, and Richardson (2008)
IF 13.6 · Zone 1 · Psychology
Advances in Methods and Practices in Psychological Science · Pub Date: 2020-09-01 · DOI: 10.1177/2515245919870737
H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen
{"title":"Many Labs 5: Registered Replication of Crosby, Monin, and Richardson (2008)","authors":"H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen","doi":"10.1177/2515245919870737","DOIUrl":"https://doi.org/10.1177/2515245919870737","url":null,"abstract":"Crosby, Monin, and Richardson (2008) found that hearing an offensive remark caused subjects (N = 25) to look longer at a potentially offended person, but only if that person could hear the remark. On the basis of this result, they argued that people use social referencing to assess the offensiveness. However, in a direct replication in the Reproducibility Project: Psychology, the result for Crosby et al.’s key effect was not significant. In the current project, we tested whether the size of the social-referencing effect might be increased by a peer-reviewed and preregistered protocol manipulation in which some participants were given context to understand why the remark was potentially offensive. Three labs in Europe and the United States (N = 283) took part. The protocol manipulation did not affect the size of the social-referencing effect. However, we did replicate the original effect reported by Crosby et al., albeit with a much smaller effect size. We discuss these results in the context of ongoing debates about how replication attempts should treat statistical power and contextual sensitivity.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"353 - 365"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245919870737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43192977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2