Melissa Simone, Cory J Cascalheira, Benjamin G Pierce
{"title":"一项准实验研究,探讨多模式僵尸筛查工具的功效,以及在在线心理研究中维护数据完整性的建议。","authors":"Melissa Simone, Cory J Cascalheira, Benjamin G Pierce","doi":"10.1037/amp0001183","DOIUrl":null,"url":null,"abstract":"<p><p>Bots are automated software programs that pose an ongoing threat to psychological research by invading online research studies and their increasing sophistication over time. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of five <i>task-based</i> (i.e., anagrams, visual search), <i>question-based</i> (i.e., attention checks, ReCAPTCHA), and <i>data-based</i> (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, study results suggest that bot screening tools function similarly for participants recruited across incentive conditions. Moreover, the present analyses revealed heterogeneity in the efficacy of bot screening tool subtypes. Notably, the present results suggest that the least effective bot screening tools were among the most commonly used tools in existing literature (e.g., ReCAPTCHA). In sum, the study findings revealed highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":12,"journal":{"name":"ACS Chemical Health & Safety","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10799166/pdf/","citationCount":"0","resultStr":"{\"title\":\"A quasi-experimental study examining the efficacy of multimodal bot screening tools and recommendations to preserve data integrity in online psychological research.\",\"authors\":\"Melissa Simone, Cory J Cascalheira, Benjamin G Pierce\",\"doi\":\"10.1037/amp0001183\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Bots are automated software programs that pose an ongoing threat to psychological research by invading online research studies and their increasing sophistication over time. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of five <i>task-based</i> (i.e., anagrams, visual search), <i>question-based</i> (i.e., attention checks, ReCAPTCHA), and <i>data-based</i> (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, study results suggest that bot screening tools function similarly for participants recruited across incentive conditions. 
Moreover, the present analyses revealed heterogeneity in the efficacy of bot screening tool subtypes. Notably, the present results suggest that the least effective bot screening tools were among the most commonly used tools in existing literature (e.g., ReCAPTCHA). In sum, the study findings revealed highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":12,\"journal\":{\"name\":\"ACS Chemical Health & Safety\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10799166/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Chemical Health & Safety\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/amp0001183\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/7/20 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Chemical Health & Safety","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/amp0001183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/7/20 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH","Score":null,"Total":0}
A quasi-experimental study examining the efficacy of multimodal bot screening tools and recommendations to preserve data integrity in online psychological research.
Bots are automated software programs that pose an ongoing threat to psychological research by invading online research studies, and they have grown increasingly sophisticated over time. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of five task-based (i.e., anagrams, visual search), question-based (i.e., attention checks, ReCAPTCHA), and data-based (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, study results suggest that bot screening tools function similarly for participants recruited across incentive conditions. Moreover, the present analyses revealed heterogeneity in the efficacy of bot screening tool subtypes. Notably, the present results suggest that the least effective bot screening tools were among the most commonly used tools in existing literature (e.g., ReCAPTCHA). In sum, the study findings revealed highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
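For readers who want a concrete sense of the analytic approach summarized above, the short Python sketch below illustrates Bonferroni-adjusted univariate logistic regressions of the kind described, predicting suspected bot status from screening-tool outcomes. It is a minimal illustration under stated assumptions, not the authors' code: the data frame, the column names (is_bot, failed_anagram, failed_recaptcha, inconsistent_answers), and the values are hypothetical placeholders.

# Illustrative sketch (hypothetical data, not the authors' analysis):
# Bonferroni-adjusted univariate logistic regressions predicting suspected
# bot status from screening-tool outcomes, using pandas and statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

# Hypothetical respondent-level flags: 1 = failed/flagged, 0 = passed.
df = pd.DataFrame({
    "is_bot":               [1, 0, 0, 1, 1, 0, 0, 1, 0, 1],
    "failed_anagram":       [1, 0, 0, 1, 1, 0, 1, 1, 0, 0],
    "failed_recaptcha":     [0, 1, 0, 1, 0, 0, 0, 1, 0, 0],
    "inconsistent_answers": [1, 0, 1, 1, 0, 0, 0, 1, 0, 1],
})

screeners = ["failed_anagram", "failed_recaptcha", "inconsistent_answers"]
pvals = []
for col in screeners:
    X = sm.add_constant(df[[col]])               # intercept + one screener
    fit = sm.Logit(df["is_bot"], X).fit(disp=0)  # univariate logistic model
    pvals.append(fit.pvalues[col])

# Bonferroni correction across the family of univariate tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for col, p, r in zip(screeners, p_adj, reject):
    print(f"{col}: adjusted p = {p:.3f}, flag retained = {r}")

In practice, a multivariate analogue would enter all screeners into a single sm.Logit model; the Bonferroni step simply divides the family-wise alpha across however many tests are run.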
Journal introduction:
The Journal of Chemical Health and Safety focuses on news, information, and ideas relating to issues and advances in chemical health and safety. The Journal of Chemical Health and Safety covers up-to-the-minute, in-depth views of safety issues ranging from OSHA and EPA regulations to the safe handling of hazardous waste, from the latest innovations in effective chemical hygiene practices to the courts' most recent rulings on safety-related lawsuits. The Journal of Chemical Health and Safety presents real-world information that health, safety and environmental professionals and others responsible for the safety of their workplaces can put to use right away, identifying potential and developing safety concerns before they do real harm.