Authors: Thao Ngo, Magdalena Wischnewski, Rebecca Bernemann, Martin Jansen, Nicole Krämer
DOI: 10.1016/j.chb.2023.107819
Journal: Computers in Human Behavior, Volume 146, Article 107819
Publication date: 2023-09-01 (Journal Article)
Impact Factor: 9.0; JCR: Q1 (Psychology, Experimental)
URL: https://www.sciencedirect.com/science/article/pii/S074756322300170X
Spot the bot: Investigating user's detection cues for social bots and their willingness to verify Twitter profiles
Detecting social bots is important for users to assess the credibility and trustworthiness of information on social media. In this work, we therefore investigate how users become suspicious of social bots and users' willingness to verify Twitter profiles. Focusing on political social bots, we first explored which cues users apply to detect social bots in a qualitative online study (N = 30). Content analysis revealed three cue categories: content and form, behavior, profile characteristics. In a subsequent online experiment (N = 221), we examined which cues evoke users’ willingness to verify profiles. Extending prior literature on partisan-motivated reasoning, we further investigated the effects of type of profile (bot, ambiguous, human) and opinion-congruency, i.e., whether a profile shares the same opinion or not, on the willingness to verify a Twitter profile. Our analysis showed that homogeneity in behavior and content and form was most important to users. Confirming our hypothesis, participants were more willing to verify opinion-incongruent profiles than congruent ones. Bot profiles were most likely to be verified. Our main conclusion is that users apply profile verification tools to confirm their perception of a social media profile instead of alleviating their uncertainties about it. Partisan-motivated reasoning drives profile verification for bot and human profiles.
Journal introduction:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles cover topics such as professional practice, training, research, human development, learning, cognition, personality, and social interaction. Its focus is on human interaction with computers, treating the computer as a medium through which human behaviors are shaped and expressed. The journal is valuable to professionals interested in the psychological aspects of computer use, even those with limited knowledge of computers.