The Invisible Political Officer: How Personalization Algorithms Shape Public Opinion

IF 2.9 · Tier 1 (Philosophy) · Q1 ETHICS
K. Toloknev
Journal: Journal of Political Philosophy
DOI: 10.30570/2078-5089-2022-107-4-63-82
Published: 2022-12-23 (Journal Article)
Citations: 0

Abstract

Social media have become firmly entrenched in modern everyday life, yet their influence on the formation of public opinion is not well understood. An important feature of social media is that they are not neutral. Not only do people interact with each other on social media platforms, but the platforms themselves actively interact with people, selecting personalized content for them based on information about their interests and behavior. In 2011, Eli Pariser hypothesized that content personalization should lead to the formation of a kind of “information cocoon”, or “filter bubble” — a homogeneous group of users who hold similar views. However, the fragmentation of the Internet community into “filter bubbles” is not the only threat posed by the use of personalization algorithms. Even more dangerously, social media possess the ability to manipulate content selection algorithms in order to influence users’ views. The article attempts to test the reality of these threats through computational modeling. To this end, the author employs a simple agent-based model that simulates the impact of personalization algorithms on communication in social media. The article demonstrates that, contrary to Pariser’s hypothesis, algorithms that select content as close as possible to user preferences only rarely give rise to “filter bubbles”. The author also finds that manipulating personalization algorithms can influence the formation of public opinion on a stable basis only under two conditions: (1) when all users are manipulated and are simultaneously open to external influence; (2) when manipulation targets so-called “centrists”, who do not hold a clear-cut opinion on the issue in question.
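The abstract does not specify the model's equations, so the following is only a minimal sketch of the kind of agent-based setup it describes: agents hold opinions on a one-dimensional scale, a hypothetical personalization rule always serves each agent the peer opinion closest to its own preference, and agents update via a bounded-confidence rule (only content within an "openness" window persuades). All parameter names and dynamics here are assumptions for illustration, not the author's actual model.

```python
import random

def simulate(n_agents=100, steps=2000, learning_rate=0.1, openness=0.4, seed=42):
    """Toy sketch (assumed dynamics, not the paper's model):
    a personalization algorithm shows each agent the peer opinion
    closest to its own; the agent moves toward content that falls
    inside its openness window (bounded confidence)."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        # Personalization: serve the peer whose opinion best matches agent i's.
        j = min((k for k in range(n_agents) if k != i),
                key=lambda k: abs(opinions[k] - opinions[i]))
        # Bounded-confidence update: only nearby content persuades.
        if abs(opinions[j] - opinions[i]) < openness:
            opinions[i] += learning_rate * (opinions[j] - opinions[i])
    return opinions

def count_clusters(opinions, gap=0.1):
    """Count opinion clusters ('filter bubbles'): groups of agents
    separated from the next group by a gap larger than `gap`."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)
```

Note how in this sketch pure preference-matching is self-limiting: the closest peer is almost always very near the agent's current opinion, so updates are tiny — which is loosely consistent with the paper's finding that preference-matching algorithms only rarely produce filter bubbles.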
Source journal metrics: CiteScore 4.10 · Self-citation rate 5.60% · Articles published: 17
Journal description: The Journal of Political Philosophy is an international journal devoted to the study of theoretical issues arising out of moral, legal and political life. It welcomes, and hopes to foster, work cutting across a variety of disciplinary concerns, among them philosophy, sociology, history, economics and political science. The journal encourages new approaches, including (but not limited to): feminism; environmentalism; critical theory, post-modernism and analytical Marxism; social and public choice theory; law and economics, critical legal studies and critical race studies; and game theoretic, socio-biological and anthropological approaches to politics. It also welcomes work in the history of political thought which builds to a larger philosophical point and work in the philosophy of the social sciences and applied ethics with broader political implications. Featuring a distinguished editorial board from major centres of thought from around the globe, the journal draws equally upon the work of non-philosophers and philosophers and provides a forum of debate between disparate factions who usually keep to their own separate journals.