Social media prompts to encourage intervening with cancer treatment misinformation

Allison J. Lazard, Tara Licciardello Queen, Marlyn Pulido, Shelby Lake, Sydney Nicolla, Hung-Jui Tan, Marjory Charlot, Andrew B. Smitherman, Nabarun Dasgupta

Social Science & Medicine, Volume 372, Article 117950. Published 2025-03-12. DOI: 10.1016/j.socscimed.2025.117950
Abstract
Misinformation about false and potentially harmful cancer treatments and cures is shared widely on social media. Strategies that encourage the cancer community to intervene prosocially, by flagging and reporting false posts, are needed to reduce cancer treatment misinformation. Automated prompts that encourage flagging of misinformation are a promising approach to increase intervening. Prompts may be more effective when paired with social cues about others’ actions and clear platform policies. We examined whether prompts alone (referred to as standard prompts) or social cue prompts paired with a post-removal policy would lead to more intervening and less sharing, and whether they would affect cognitive predictors from the Bystander Intervention Model (e.g., responsibility). We recruited U.S. adults in cancer networks for a within-persons, longitudinal experiment (Times 1–4). We randomized the viewing order of 1) standard prompts or 2) social cue prompts with the policy, switching conditions at Time 3. Prompts encouraged intervening (flagging) without leading to other unintended actions. Participants flagged misinformation (prompted, 24–33%) more frequently than they disliked (unprompted, 3–12%) or liked (unintended, 4–35%) posts on the simulated feed. Initially (Times 1–2), social cue prompts (vs. standard) encouraged greater willingness to intervene and perceived responsibility, p = .01–.03; however, there were no differences afterward (Times 3–4), potentially due to carryover effects. Prompts (also called warnings, nudges, or labels) alerting viewers to cancer treatment misinformation are a promising approach to encourage intervening (flagging). Prompts can be enhanced with social cues (i.e., counts of others who flagged) and clear platform policies to encourage the cancer community to reduce misinformation on social media.
About the Journal
Social Science & Medicine provides an international and interdisciplinary forum for the dissemination of social science research on health. We publish original research articles (both empirical and theoretical), reviews, position papers and commentaries on health issues, to inform current research, policy and practice in all areas of common interest to social scientists, health practitioners, and policy makers. The journal publishes material relevant to any aspect of health from a wide range of social science disciplines (anthropology, economics, epidemiology, geography, policy, psychology, and sociology), and material relevant to the social sciences from any of the professions concerned with physical and mental health, health care, clinical practice, and health policy and organization. We encourage material which is of general interest to an international readership.