A quasi-experimental study examining the efficacy of multimodal bot screening tools and recommendations to preserve data integrity in online psychological research.
Melissa Simone, Cory J Cascalheira, Benjamin G Pierce
{"title":"A quasi-experimental study examining the efficacy of multimodal bot screening tools and recommendations to preserve data integrity in online psychological research.","authors":"Melissa Simone, Cory J Cascalheira, Benjamin G Pierce","doi":"10.1037/amp0001183","DOIUrl":null,"url":null,"abstract":"<p><p>Bots are automated software programs that pose an ongoing threat to psychological research by invading online research studies and their increasing sophistication over time. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of five <i>task-based</i> (i.e., anagrams, visual search), <i>question-based</i> (i.e., attention checks, ReCAPTCHA), and <i>data-based</i> (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, study results suggest that bot screening tools function similarly for participants recruited across incentive conditions. Moreover, the present analyses revealed heterogeneity in the efficacy of bot screening tool subtypes. Notably, the present results suggest that the least effective bot screening tools were among the most commonly used tools in existing literature (e.g., ReCAPTCHA). In sum, the study findings revealed highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":48468,"journal":{"name":"American Psychologist","volume":" ","pages":"956-969"},"PeriodicalIF":12.3000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10799166/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Psychologist","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/amp0001183","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/7/20 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0
Abstract
Bots are automated software programs that invade online research studies, and their increasing sophistication over time makes them an ongoing threat to psychological research. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of five task-based (i.e., anagrams, visual search), question-based (i.e., attention checks, ReCAPTCHA), and data-based (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, study results suggest that bot screening tools function similarly for participants recruited across incentive conditions. Moreover, the present analyses revealed heterogeneity in the efficacy of bot screening tool subtypes. Notably, the present results suggest that the least effective bot screening tools were among the most commonly used tools in the existing literature (e.g., ReCAPTCHA). In sum, the study findings revealed both highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
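The analytic approach named in the abstract (univariate logistic regressions with a Bonferroni correction across the family of tests) can be illustrated in code. The sketch below is not the authors' code: it uses simulated data, hypothetical tool/column names, and an arbitrary bot label purely to show the general workflow of regressing bot status on each screening-tool flag and then Bonferroni-adjusting the resulting p-values.

```python
# Illustrative sketch (not the study's analysis code).
# Data, column names, and effect sizes are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n = 444  # matches the reported sample size, but the data here are fake

# Simulated binary indicators: 1 = respondent was flagged by that tool
tools = ["anagram", "visual_search", "attention_check", "recaptcha",
         "consistency", "metadata"]
df = pd.DataFrame({t: rng.integers(0, 2, n) for t in tools})
# Simulated ground-truth bot label (in practice this comes from expert coding)
df["is_bot"] = rng.integers(0, 2, n)

# Univariate logistic regression of bot status on each tool flag
pvals, odds_ratios = [], []
for t in tools:
    X = sm.add_constant(df[[t]])
    fit = sm.Logit(df["is_bot"], X).fit(disp=0)
    pvals.append(fit.pvalues[t])
    odds_ratios.append(np.exp(fit.params[t]))

# Bonferroni adjustment across the family of tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

results = pd.DataFrame({"tool": tools, "OR": odds_ratios,
                        "p_bonferroni": p_adj, "significant": reject})
print(results.round(3))
```

A multivariate analogue of this sketch would enter all tool flags into a single model (e.g., regressing the bot label on the full set of tool columns at once); the Bonferroni step simply divides the nominal alpha by the number of tests in the family.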
Journal introduction:
Established in 1946, American Psychologist® is the flagship peer-reviewed scholarly journal of the American Psychological Association. It publishes high-impact papers of broad interest, including empirical reports, meta-analyses, and scholarly reviews, covering psychological science, practice, education, and policy. Articles often address issues of national and international significance within the field of psychology and its relationship to society. Published in an accessible style, contributions in American Psychologist are designed to be understood by both psychologists and the general public.