J. Bottesini, Christie Aschwanden, M. Rhemtulla, S. Vazire
{"title":"科学记者如何评价心理学研究?","authors":"J. Bottesini, Christie Aschwanden, M. Rhemtulla, S. Vazire","doi":"10.1177/25152459231183912","DOIUrl":null,"url":null,"abstract":"What information do science journalists use when evaluating psychology findings? We examined this in a preregistered, controlled experiment by manipulating four factors in descriptions of fictitious behavioral-psychology studies: (a) the study’s sample size, (b) the representativeness of the study’s sample, (c) the p value associated with the finding, and (d) institutional prestige of the researcher who conducted the study. We investigated the effects of these manipulations on 181 real journalists’ perceptions of each study’s trustworthiness and newsworthiness. Sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a finding was; larger sample sizes led to an increase of about two-thirds of 1 point on a 7-point scale. University prestige had no effect in this controlled setting, and the effects of sample representativeness and of p values were inconclusive, but any effects in this setting are likely quite small. Exploratory analyses suggest that other types of prestige might be more important (i.e., journal prestige) and that study design (experimental vs. correlational) may also affect trustworthiness and newsworthiness.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":15.6000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"How Do Science Journalists Evaluate Psychology Research?\",\"authors\":\"J. Bottesini, Christie Aschwanden, M. Rhemtulla, S. Vazire\",\"doi\":\"10.1177/25152459231183912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"What information do science journalists use when evaluating psychology findings? We examined this in a preregistered, controlled experiment by manipulating four factors in descriptions of fictitious behavioral-psychology studies: (a) the study’s sample size, (b) the representativeness of the study’s sample, (c) the p value associated with the finding, and (d) institutional prestige of the researcher who conducted the study. We investigated the effects of these manipulations on 181 real journalists’ perceptions of each study’s trustworthiness and newsworthiness. Sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a finding was; larger sample sizes led to an increase of about two-thirds of 1 point on a 7-point scale. University prestige had no effect in this controlled setting, and the effects of sample representativeness and of p values were inconclusive, but any effects in this setting are likely quite small. Exploratory analyses suggest that other types of prestige might be more important (i.e., journal prestige) and that study design (experimental vs. 
correlational) may also affect trustworthiness and newsworthiness.\",\"PeriodicalId\":55645,\"journal\":{\"name\":\"Advances in Methods and Practices in Psychological Science\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":15.6000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Methods and Practices in Psychological Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/25152459231183912\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Methods and Practices in Psychological Science","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/25152459231183912","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
How Do Science Journalists Evaluate Psychology Research?
What information do science journalists use when evaluating psychology findings? We examined this in a preregistered, controlled experiment by manipulating four factors in descriptions of fictitious behavioral-psychology studies: (a) the study’s sample size, (b) the representativeness of the study’s sample, (c) the p value associated with the finding, and (d) institutional prestige of the researcher who conducted the study. We investigated the effects of these manipulations on 181 real journalists’ perceptions of each study’s trustworthiness and newsworthiness. Sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a finding was; larger sample sizes led to an increase of about two-thirds of 1 point on a 7-point scale. University prestige had no effect in this controlled setting, and the effects of sample representativeness and of p values were inconclusive, but any effects in this setting are likely quite small. Exploratory analyses suggest that other types of prestige might be more important (i.e., journal prestige) and that study design (experimental vs. correlational) may also affect trustworthiness and newsworthiness.
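To make the factorial vignette design concrete, the sketch below simulates data of the kind described in the abstract and fits a simple linear model relating the four manipulated attributes to trustworthiness ratings. This is a minimal illustration only: the data, factor codings, number of ratings per journalist, and the modeling choice are all hypothetical and are not the authors' materials or preregistered analysis. Only the simulated sample-size effect (about 0.67 points on the 7-point scale) is pegged to the reported result.

```python
# Illustrative sketch with simulated (hypothetical) data -- not the study's data or analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 181 * 4  # assume, for illustration, that each of 181 journalists rates several vignettes

# Four binary factors mirroring the manipulated vignette attributes.
df = pd.DataFrame({
    "large_sample": rng.integers(0, 2, n),    # small vs. large sample size
    "representative": rng.integers(0, 2, n),  # convenience vs. representative sample
    "low_p": rng.integers(0, 2, n),           # weaker vs. stronger p value (illustrative coding)
    "prestigious": rng.integers(0, 2, n),     # less vs. more prestigious institution
})

# Simulate 1-7 trustworthiness ratings in which only sample size has a
# non-trivial effect (~0.67 points), loosely echoing the reported pattern.
latent = 4.0 + 0.67 * df["large_sample"] + rng.normal(0, 1.2, n)
df["trust"] = latent.round().clip(1, 7)

# A plain OLS fit; a real analysis would also account for repeated ratings
# within journalists (e.g., with a mixed-effects model).
model = smf.ols(
    "trust ~ large_sample + representative + low_p + prestigious", data=df
).fit()
print(model.params)  # coefficient on large_sample should land near 0.67
```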
Journal introduction:
In 2021, Advances in Methods and Practices in Psychological Science began its transition to an open-access journal. The journal publishes innovative developments in research methods, practices, and research conduct within psychological science. It covers a wide range of areas and topics and encourages work that integrates methodological and analytical questions.
AMPPS aims to bring the latest methodological advances to researchers across disciplines, including those who are not methods experts. The journal therefore seeks submissions that are accessible to readers with varied research interests and that represent the diverse research trends within psychological science.
AMPPS welcomes articles that communicate advances in methods, practices, metascience, and empirical best practices, as well as tutorials, commentaries, and simulation studies on new techniques and research tools. The journal also publishes papers that bring advances from specialized subfields to a broader audience, and it accepts Registered Replication Reports, which replicate important findings from previously published studies.
The move to open access is intended to increase accessibility and promote the dissemination of new developments in research methods and practices in psychological science.