Robert Epstein, Amanda Newland, Li Yu Tang
Computers in Human Behavior, Volume 166, Article 108578 (published 2025-01-27)
DOI: 10.1016/j.chb.2025.108578
Available at: https://www.sciencedirect.com/science/article/pii/S0747563225000251
The “digital personalization effect” (DPE): A quantification of the possible extent to which personalizing content can increase the impact of online manipulations
In recently published reports on the “search engine manipulation effect,” the “targeted messaging effect,” and the “answer bot effect,” exposure to biased content produced significant shifts in the opinions and voting preferences of undecided voters. In the present study, these effects were replicated on simulations of the Google search engine, X (f.k.a. Twitter), and Alexa, and the biased content was also personalized. Participants (all from the US) were first asked to rank-order news and other sources according to how much they preferred them. They were then randomly assigned to receive content about the 2019 Australian election for Prime Minister from either their most preferred or their least preferred sources. Participants were also randomly assigned to see content that was highly biased to favor either Candidate A or Candidate B. In all three experiments, the voting preferences of participants who saw biased content – that is, content favoring Candidate A or Candidate B – apparently coming from their least preferred sources shifted by relatively small amounts (17.1% in Experiment 1, 21.8% in Experiment 2, and 39.3% in Experiment 3), whereas the voting preferences of participants who saw the same biased content apparently coming from their most preferred sources shifted by significantly and substantially larger amounts (67.7% in Experiment 1, 71.9% in Experiment 2, and 65.9% in Experiment 3). All shifts occurred in the direction of the candidate favored by the bias. We conclude that personalizing biased content can greatly increase its impact.
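The abstract does not spell out how the shift percentages were computed. In Epstein's earlier work on the search engine manipulation effect, the related metric is roughly the fraction of participants whose preference moved toward the bias-favored candidate after exposure. A minimal, hypothetical sketch of such a calculation (the paper's exact formula may differ; all data below are invented for illustration):

```python
def shift_toward_favored(pre_ratings, post_ratings):
    """Fraction of participants whose rating of the bias-favored
    candidate increased between the pre- and post-exposure
    measurements. Inputs are parallel lists of per-participant
    ratings (e.g., on an 11-point likability scale)."""
    moved = sum(1 for pre, post in zip(pre_ratings, post_ratings)
                if post > pre)
    return moved / len(pre_ratings)

# Hypothetical data: 8 undecided participants rating the favored candidate
pre = [5, 5, 6, 4, 5, 5, 6, 5]
post = [7, 5, 8, 4, 6, 7, 6, 8]
print(shift_toward_favored(pre, post))  # 0.625, i.e., a 62.5% shift
```

Comparing this fraction between the "least preferred sources" and "most preferred sources" groups would reproduce the kind of contrast the abstract reports (e.g., 17.1% vs. 67.7% in Experiment 1).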
Journal introduction:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.