{"title":"我能相信这份报纸吗?","authors":"Andrey Anikin","doi":"10.3758/s13423-025-02740-3","DOIUrl":null,"url":null,"abstract":"<p><p>After a decade of data falsification scandals and replication failures in psychology and related empirical disciplines, there are urgent calls for open science and structural reform in the publishing industry. In the meantime, however, researchers need to learn how to recognize tell-tale signs of methodological and conceptual shortcomings that make a published claim suspect. I review four key problems and propose simple ways to detect them. First, the study may be fake; if in doubt, inspect the authors' and journal's profiles and request to see the raw data to check for inconsistencies. Second, there may be too little data; low precision of effect sizes is a clear warning sign of this. Third, the data may not be analyzed correctly; excessive flexibility in data analysis can be deduced from signs of data dredging and convoluted post hoc theorizing in the text, while violations of model assumptions can be detected by examining plots of observed data and model predictions. Fourth, the conclusions may not be justified by the data; common issues are inappropriate acceptance of the null hypothesis, biased meta-analyses, over-generalization over unmodeled variance, hidden confounds, and unspecific theoretical predictions. The main takeaways are to verify that the methodology is robust and to distinguish between what the actual results are and what the authors claim these results mean when citing empirical work. Critical evaluation of published evidence is an essential skill to develop as it can prevent researchers from pursuing unproductive avenues and ensure better trustworthiness of science as a whole.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can I trust this paper?\",\"authors\":\"Andrey Anikin\",\"doi\":\"10.3758/s13423-025-02740-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>After a decade of data falsification scandals and replication failures in psychology and related empirical disciplines, there are urgent calls for open science and structural reform in the publishing industry. In the meantime, however, researchers need to learn how to recognize tell-tale signs of methodological and conceptual shortcomings that make a published claim suspect. I review four key problems and propose simple ways to detect them. First, the study may be fake; if in doubt, inspect the authors' and journal's profiles and request to see the raw data to check for inconsistencies. Second, there may be too little data; low precision of effect sizes is a clear warning sign of this. Third, the data may not be analyzed correctly; excessive flexibility in data analysis can be deduced from signs of data dredging and convoluted post hoc theorizing in the text, while violations of model assumptions can be detected by examining plots of observed data and model predictions. Fourth, the conclusions may not be justified by the data; common issues are inappropriate acceptance of the null hypothesis, biased meta-analyses, over-generalization over unmodeled variance, hidden confounds, and unspecific theoretical predictions. 
The main takeaways are to verify that the methodology is robust and to distinguish between what the actual results are and what the authors claim these results mean when citing empirical work. Critical evaluation of published evidence is an essential skill to develop as it can prevent researchers from pursuing unproductive avenues and ensure better trustworthiness of science as a whole.</p>\",\"PeriodicalId\":20763,\"journal\":{\"name\":\"Psychonomic Bulletin & Review\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychonomic Bulletin & Review\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.3758/s13423-025-02740-3\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychonomic Bulletin & Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13423-025-02740-3","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
After a decade of data falsification scandals and replication failures in psychology and related empirical disciplines, there are urgent calls for open science and structural reform in the publishing industry. In the meantime, however, researchers need to learn how to recognize tell-tale signs of methodological and conceptual shortcomings that make a published claim suspect. I review four key problems and propose simple ways to detect them. First, the study may be fake; if in doubt, inspect the authors' and journal's profiles and request to see the raw data to check for inconsistencies. Second, there may be too little data; low precision of effect sizes is a clear warning sign of this. Third, the data may not be analyzed correctly; excessive flexibility in data analysis can be deduced from signs of data dredging and convoluted post hoc theorizing in the text, while violations of model assumptions can be detected by examining plots of observed data and model predictions. Fourth, the conclusions may not be justified by the data; common issues are inappropriate acceptance of the null hypothesis, biased meta-analyses, over-generalization over unmodeled variance, hidden confounds, and unspecific theoretical predictions. The main takeaways are to verify that the methodology is robust and to distinguish between what the actual results are and what the authors claim these results mean when citing empirical work. Critical evaluation of published evidence is an essential skill to develop as it can prevent researchers from pursuing unproductive avenues and ensure better trustworthiness of science as a whole.
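To make the point about precision concrete, here is a minimal sketch (not from the paper) of how wide the confidence interval of a standardized effect size becomes with small samples. It assumes the common large-sample approximation for the standard error of Cohen's d and uses illustrative sample sizes.

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d from two independent groups,
    using the common large-sample variance approximation
    var(d) ~ (n1 + n2)/(n1*n2) + d**2 / (2*(n1 + n2))."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# With 20 participants per group, a "medium" effect of d = 0.5 comes with a
# CI of roughly (-0.13, 1.13): compatible with anything from no effect to a
# very large one. With 200 per group the interval narrows to about (0.30, 0.70).
print(cohens_d_ci(0.5, 20, 20))
print(cohens_d_ci(0.5, 200, 200))
```

An interval spanning zero to a very large effect is exactly the "low precision of effect sizes" warning sign the abstract describes.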
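Likewise, the advice to examine plots of observed data against model predictions can be illustrated with a quick residual check. The simulated data, the straight-line model, and all names below are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Simulated example: the true relationship is curved, but we fit a straight line.
x = rng.uniform(0, 10, 200)
y = 2 + 0.5 * x ** 2 + rng.normal(0, 3, size=x.size)

# Ordinary least-squares fit of a straight line; np.polyfit returns
# coefficients from highest degree down, i.e. (slope, intercept) for deg=1.
slope, intercept = np.polyfit(x, y, deg=1)
predicted = intercept + slope * x
residuals = y - predicted

# Residuals vs. predictions: a clear U-shape here signals that the
# straight-line model misrepresents the data-generating process.
fig, ax = plt.subplots()
ax.scatter(predicted, residuals, s=10)
ax.axhline(0, linestyle="--")
ax.set_xlabel("Model prediction")
ax.set_ylabel("Residual (observed - predicted)")
plt.show()
```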
Journal introduction:
The journal provides coverage spanning a broad spectrum of topics in all areas of experimental psychology. The journal is primarily dedicated to the publication of theory and review articles and brief reports of outstanding experimental work. Areas of coverage include cognitive psychology broadly construed, including but not limited to action, perception, & attention, language, learning & memory, reasoning & decision making, and social cognition. We welcome submissions that approach these issues from a variety of perspectives such as behavioral measurements, comparative psychology, development, evolutionary psychology, genetics, neuroscience, and quantitative/computational modeling. We particularly encourage integrative research that crosses traditional content and methodological boundaries.