A. Russell, M. Browne, N. Hing, M. Rockloff, P. Newall
{"title":"样本是否具有代表性或公正性?回复Pickering和Blaszczynski","authors":"A. Russell, M. Browne, N. Hing, M. Rockloff, P. Newall","doi":"10.1080/14459795.2021.1973535","DOIUrl":null,"url":null,"abstract":"ABSTRACT Pickering and Blaszczynski’s paper (2021) claims that the problem gambling rate is inflated in paid online convenience and crowdsourced samples. However, there is a methodological flaw in their findings: they combined problem gambling rates from samples that are specific by design (e.g. at-least monthly sports bettors), and compared them to a problem gambling prevalence estimate from the general population. Pickering and Blaszczynski conflate three constructs: representativeness, bias and data quality. Data quality can be optimized through protections and checks, but these do not necessarily make samples more representative, or less biased. Many of the biases present in paid online convenience samples (e.g. self-selection biases) also apply to the gold standard of random digit dial telephone surveys, which is manifestly evident in very low response rates. These biases are also present in industry-recruited and venue-recruited samples, as well as samples of university students and treatment-seeking clients. Paid online convenience samples also have clear benefits. For example, it is possible to obtain large samples of very specific subgroups. Online surveys may reduce bias associated with self-reporting potentially stigmatizing conditions, like problem gambling. It is important not to discount research simply because it uses a paid online convenience or crowdsourced sample.","PeriodicalId":47301,"journal":{"name":"International Gambling Studies","volume":"22 1","pages":"102 - 113"},"PeriodicalIF":2.5000,"publicationDate":"2021-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"Are any samples representative or unbiased? reply to Pickering and Blaszczynski\",\"authors\":\"A. Russell, M. Browne, N. Hing, M. Rockloff, P. Newall\",\"doi\":\"10.1080/14459795.2021.1973535\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Pickering and Blaszczynski’s paper (2021) claims that the problem gambling rate is inflated in paid online convenience and crowdsourced samples. However, there is a methodological flaw in their findings: they combined problem gambling rates from samples that are specific by design (e.g. at-least monthly sports bettors), and compared them to a problem gambling prevalence estimate from the general population. Pickering and Blaszczynski conflate three constructs: representativeness, bias and data quality. Data quality can be optimized through protections and checks, but these do not necessarily make samples more representative, or less biased. Many of the biases present in paid online convenience samples (e.g. self-selection biases) also apply to the gold standard of random digit dial telephone surveys, which is manifestly evident in very low response rates. These biases are also present in industry-recruited and venue-recruited samples, as well as samples of university students and treatment-seeking clients. Paid online convenience samples also have clear benefits. For example, it is possible to obtain large samples of very specific subgroups. Online surveys may reduce bias associated with self-reporting potentially stigmatizing conditions, like problem gambling. 
It is important not to discount research simply because it uses a paid online convenience or crowdsourced sample.\",\"PeriodicalId\":47301,\"journal\":{\"name\":\"International Gambling Studies\",\"volume\":\"22 1\",\"pages\":\"102 - 113\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2021-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Gambling Studies\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1080/14459795.2021.1973535\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"SUBSTANCE ABUSE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Gambling Studies","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1080/14459795.2021.1973535","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SUBSTANCE ABUSE","Score":null,"Total":0}
Are any samples representative or unbiased? Reply to Pickering and Blaszczynski
ABSTRACT Pickering and Blaszczynski (2021) claim that problem gambling rates are inflated in paid online convenience and crowdsourced samples. However, there is a methodological flaw in their approach: they combined problem gambling rates from samples that are specific by design (e.g. at-least-monthly sports bettors) and compared them to a problem gambling prevalence estimate for the general population. Pickering and Blaszczynski conflate three constructs: representativeness, bias and data quality. Data quality can be optimized through protections and checks, but these do not necessarily make samples more representative or less biased. Many of the biases present in paid online convenience samples (e.g. self-selection biases) also apply to the gold standard of random-digit-dial telephone surveys, as is evident in their very low response rates. These biases are also present in industry-recruited and venue-recruited samples, as well as in samples of university students and treatment-seeking clients. Paid online convenience samples also have clear benefits. For example, it is possible to obtain large samples of very specific subgroups. Online surveys may also reduce bias associated with self-reporting potentially stigmatizing conditions, like problem gambling. It is important not to discount research simply because it uses a paid online convenience or crowdsourced sample.
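The comparison flaw described in the abstract is, at its core, a base-rate argument, so a minimal worked example may help. The sketch below uses entirely hypothetical prevalence figures (none are taken from Russell et al. or from Pickering and Blaszczynski) to show that a rate measured in a deliberately specific subgroup can exceed the general-population estimate even when sampling is perfectly unbiased.

```python
# Illustrative sketch only: the prevalence figures below are hypothetical
# placeholders, not values from the papers discussed. It shows why a problem
# gambling rate from a sample that is specific by design (e.g. at-least-monthly
# sports bettors) should exceed the general-population estimate even with no
# sampling bias, simply because the subgroup is conditioned on heavier
# gambling involvement.

# Hypothetical joint distribution over the adult population.
p_bettor = 0.10                 # P(at-least-monthly sports bettor)  [assumed]
p_problem_given_bettor = 0.15   # P(problem gambling | bettor)       [assumed]
p_problem_given_other = 0.01    # P(problem gambling | not a bettor) [assumed]

# General-population prevalence via the law of total probability.
p_problem = (p_problem_given_bettor * p_bettor
             + p_problem_given_other * (1 - p_bettor))

print(f"General-population prevalence: {p_problem:.3f}")                 # ~0.024
print(f"Rate within the bettor subgroup: {p_problem_given_bettor:.3f}")  # 0.150

# The subgroup rate (0.15) exceeds the population estimate (~0.024), yet
# nothing here reflects panel recruitment, data quality, or bias: the gap is
# purely an artefact of comparing a conditioned subgroup with the whole
# population, which is the methodological flaw the reply identifies.
```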