{"title":"赌博研究结果报告和解释中的潜在偏见来源","authors":"P. Delfabbro, Daniel L. King, A. Blaszczynski","doi":"10.4309/jgi.2020.45.10","DOIUrl":null,"url":null,"abstract":"Over the last decade, increasing attention has been directed to specific problems confronting the social sciences. These concerns have included not only well-documented difficulties in replicating major research findings (Open Science Collaboration, 2015), but also problems regarding the nature of the scientific process itself (Chambers, 2017). A number of these concerns have been articulated by Chambers (2017) in his book The Seven Deadly Sins of Psychology. This book was written not only to highlight the potential causes of the ‘‘replication crisis,’’ but also to call attention to important sources of bias and unreliability in social science research. Chambers provided a detailed account of the numerous ways in which the validity and reliability of research can be compromised. Certain of these ‘‘sins’’ were generally self-evident, and included fraud (e.g., the fabrication of data) and the withholding of data from independent scrutiny. Other practices, however, were more subtle. Examples here included the practice of massing or ‘‘data tuning’’ until it yields the results required; ‘‘HARKing,’’ in which the study’s hypotheses are reframed after the results are known; and various forms of ‘‘p-hacking,’’ in which data are analysed or collected to ensure statistical significance. Common examples of ‘‘p-hacking,’’ Chambers observed, included the selective addition of cases to a sample to obtain significance; selective non-statistically-justified removal of cases to increase effects; and the use of multiple analytical test strategies until one yields significance.","PeriodicalId":45414,"journal":{"name":"Journal of Gambling Issues","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2020-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Potential sources of bias in the reporting and interpretation of gambling research findings\",\"authors\":\"P. Delfabbro, Daniel L. King, A. Blaszczynski\",\"doi\":\"10.4309/jgi.2020.45.10\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the last decade, increasing attention has been directed to specific problems confronting the social sciences. These concerns have included not only well-documented difficulties in replicating major research findings (Open Science Collaboration, 2015), but also problems regarding the nature of the scientific process itself (Chambers, 2017). A number of these concerns have been articulated by Chambers (2017) in his book The Seven Deadly Sins of Psychology. This book was written not only to highlight the potential causes of the ‘‘replication crisis,’’ but also to call attention to important sources of bias and unreliability in social science research. Chambers provided a detailed account of the numerous ways in which the validity and reliability of research can be compromised. Certain of these ‘‘sins’’ were generally self-evident, and included fraud (e.g., the fabrication of data) and the withholding of data from independent scrutiny. Other practices, however, were more subtle. 
Examples here included the practice of massing or ‘‘data tuning’’ until it yields the results required; ‘‘HARKing,’’ in which the study’s hypotheses are reframed after the results are known; and various forms of ‘‘p-hacking,’’ in which data are analysed or collected to ensure statistical significance. Common examples of ‘‘p-hacking,’’ Chambers observed, included the selective addition of cases to a sample to obtain significance; selective non-statistically-justified removal of cases to increase effects; and the use of multiple analytical test strategies until one yields significance.\",\"PeriodicalId\":45414,\"journal\":{\"name\":\"Journal of Gambling Issues\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2020-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Gambling Issues\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4309/jgi.2020.45.10\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"SUBSTANCE ABUSE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Gambling Issues","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4309/jgi.2020.45.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"SUBSTANCE ABUSE","Score":null,"Total":0}
Potential sources of bias in the reporting and interpretation of gambling research findings
Over the last decade, increasing attention has been directed to specific problems confronting the social sciences. These concerns have included not only well-documented difficulties in replicating major research findings (Open Science Collaboration, 2015), but also problems regarding the nature of the scientific process itself (Chambers, 2017). A number of these concerns were articulated by Chambers (2017) in his book The Seven Deadly Sins of Psychology. The book was written not only to highlight the potential causes of the "replication crisis," but also to call attention to important sources of bias and unreliability in social science research. Chambers provided a detailed account of the numerous ways in which the validity and reliability of research can be compromised. Some of these "sins" are largely self-evident and include fraud (e.g., the fabrication of data) and the withholding of data from independent scrutiny. Other practices, however, are more subtle. Examples include the practice of "massaging" or "tuning" data until it yields the required results; "HARKing," in which a study's hypotheses are reframed after the results are known; and various forms of "p-hacking," in which data are analysed or collected in ways that ensure statistical significance. Common examples of p-hacking, Chambers observed, include the selective addition of cases to a sample until significance is obtained; the selective, statistically unjustified removal of cases to inflate effects; and the use of multiple analytical strategies until one yields a significant result.
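To make the last of these practices concrete, the following is a minimal simulation sketch, not drawn from the paper itself; the function name hacked_experiment and all parameter values are illustrative assumptions. It shows how one form of p-hacking described above, adding cases to a sample and re-testing until significance is reached, inflates the false-positive rate well beyond the nominal 5% even when no true effect exists.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def hacked_experiment(start_n=20, max_n=100, step=5, alpha=0.05):
    # Two groups drawn from the SAME distribution: the null hypothesis is true,
    # so any "significant" result is a false positive.
    a = list(rng.normal(0.0, 1.0, start_n))
    b = list(rng.normal(0.0, 1.0, start_n))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True   # "significance" reached by optional stopping
        if len(a) >= max_n:
            return False  # honest null result after exhausting the budget
        # Selectively add more cases and test again (the p-hack).
        a.extend(rng.normal(0.0, 1.0, step))
        b.extend(rng.normal(0.0, 1.0, step))

n_sims = 2000
false_positives = sum(hacked_experiment() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
# A single fixed-n test would yield roughly 0.05; repeated re-testing
# typically pushes the rate several times higher.

The inflation arises because each re-test is another draw at the 5% threshold, so the chance of crossing it at least once grows with every added batch of cases, which is precisely why the selective addition of cases is listed among the p-hacking practices above.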