Improving Social Bot Detection Through Aid and Training.

IF 2.9 | CAS Tier 3 (Psychology) | JCR Q1 BEHAVIORAL SCIENCES
Human Factors | Pub Date: 2024-10-01 | Epub Date: 2023-11-14 | DOI: 10.1177/00187208231210145
Ryan Kenny, Baruch Fischhoff, Alex Davis, Casey Canfield
{"title":"通过援助和培训提高社交机器人的检测能力。","authors":"Ryan Kenny, Baruch Fischhoff, Alex Davis, Casey Canfield","doi":"10.1177/00187208231210145","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning.</p><p><strong>Background: </strong>Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids.</p><p><strong>Method: </strong>Lay participants judged whether each of 60 Twitter personas was a human or social bot in a simulated online environment, using agreement between three machine learning algorithms to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided a bot indicator score for each tweet; the other provided a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video, focused on heuristics for identifying social bots.</p><p><strong>Results: </strong>The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it.</p><p><strong>Conclusions: </strong>Informative interventions improved social bot detection; warning alone did not.</p><p><strong>Application: </strong>We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11382440/pdf/","citationCount":"0","resultStr":"{\"title\":\"Improving Social Bot Detection Through Aid and Training.\",\"authors\":\"Ryan Kenny, Baruch Fischhoff, Alex Davis, Casey Canfield\",\"doi\":\"10.1177/00187208231210145\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning.</p><p><strong>Background: </strong>Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids.</p><p><strong>Method: </strong>Lay participants judged whether each of 60 Twitter personas was a human or social bot in a simulated online environment, using agreement between three machine learning algorithms to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided a bot indicator score for each tweet; the other provided a warning about social bots. 
Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video, focused on heuristics for identifying social bots.</p><p><strong>Results: </strong>The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it.</p><p><strong>Conclusions: </strong>Informative interventions improved social bot detection; warning alone did not.</p><p><strong>Application: </strong>We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11382440/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208231210145\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/11/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208231210145","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/11/14 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Objective: We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning.

Background: Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids.

Method: Lay participants judged whether each of 60 Twitter personas was a human or a social bot in a simulated online environment; agreement between three machine learning algorithms was used to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided with a bot indicator score for each tweet and the other with a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video focused on heuristics for identifying social bots.
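The abstract does not say how the three algorithms' outputs were combined; a minimal sketch, assuming the bot probability is simply the fraction of classifiers voting "bot", with three hypothetical rule-based stand-ins for the real models:

```python
# Minimal sketch of an agreement-based bot probability. The score here is
# the fraction of classifiers that vote "bot"; the three classifiers below
# are toy, hypothetical stand-ins -- the paper only states that agreement
# between three machine learning algorithms was used.

def agreement_bot_probability(persona, classifiers):
    """Estimate P(bot) as the fraction of classifiers voting 'bot'."""
    votes = [clf(persona) for clf in classifiers]  # each clf returns True/False
    return sum(votes) / len(votes)

# Hypothetical heuristic classifiers over simple persona features
clf_volume = lambda p: p["tweets_per_day"] > 50             # unusually high activity
clf_ratio  = lambda p: p["follower_following_ratio"] < 0.1  # follows many, few follow back
clf_age    = lambda p: p["account_age_days"] < 30           # very new account

persona = {"tweets_per_day": 80, "follower_following_ratio": 0.05, "account_age_days": 400}
print(agreement_bot_probability(persona, [clf_volume, clf_ratio, clf_age]))  # 2 of 3 vote bot -> ~0.67
```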

Results: The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it.
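The abstract does not define its overconfidence measure; a common choice, and the one assumed in this minimal sketch, is mean stated confidence minus proportion of correct judgments:

```python
# Minimal sketch of a standard overconfidence measure (assumed definition:
# mean stated confidence minus proportion correct; the abstract does not
# give the paper's exact formula). Positive values indicate overconfidence.

def overconfidence(confidences, correct):
    """Mean confidence (in [0.5, 1.0]) minus fraction of correct bot/human calls."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Toy data: five of a participant's 60 persona judgments
confs = [0.9, 0.8, 0.7, 0.95, 0.6]   # stated confidence per judgment
hits  = [1, 0, 1, 0, 1]              # 1 = judged correctly
print(round(overconfidence(confs, hits), 2))  # 0.79 - 0.60 = 0.19
```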

Conclusions: Informative interventions improved social bot detection; warning alone did not.

Application: We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.

Source journal: Human Factors (Management Science: Behavioral Sciences)
CiteScore: 10.60
Self-citation rate: 6.10%
Articles per year: 99
Review time: 6-12 weeks

About the journal: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.