Editorial Note for Special Issue on AI and Fake News, Mis(dis)information, and Algorithmic Bias

Donghee Shin, Kerk F. Kee

Artificial intelligence (AI) continues to shape the lives of media users today (Wölker & Powell, 2021). Search engines, social media, and other over-the-top service platforms are fueled by data automated and organized through AI and algorithms, which in turn control users and markets. Similarly, the platformization of news and journalism is a growing trend (Dijck et al., 2018). The process of platformization is increasingly facilitating the economic, organizational, and social extension of digital platforms into online and media ecosystems, fundamentally changing the operations of media industries and journalistic practices. Recently, platformization has accelerated due to dramatic breakthroughs in machine learning. Specifically, it is machine learning algorithms that enable the different sets of automated processes that transform input data into desired output (Dijck et al., 2018).

Algorithms play a key role in curating what information is considered most relevant to users. While popular and effective in practice, these features carry the risk of systematic discrimination, limited transparency, and vague accountability (Moller et al., 2018). While algorithmic filtering may lead to more impartial, and thus possibly fairer, processes than those controlled by humans, algorithmic recommendation has been criticized for its tendency to amplify and/or reproduce biases, distort facts, generate information asymmetry, and reinforce process opacity (Ananny & Crawford, 2018). Simply put, algorithmic biases may further compound the algorithmic injustice that machine learning automates and perpetuates. AI-powered platforms have markedly contributed to the rapid diffusion of fake news, mis(dis)information, and deepfakes, the detrimental byproducts of platformization (Dan et al., 2021). Misinformation spreads more rapidly and broadly than reliable information does, jeopardizing the credibility of algorithmic journalism.
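The bias-amplification tendency described above can be illustrated with a minimal toy simulation. This sketch is not from the editorial; the item names, scores, and update rule are all hypothetical. It shows how an engagement-ranked recommender creates a feedback loop in which a tiny initial difference between two otherwise identical items grows without bound, because the top slot earns the exposure that keeps it on top.

```python
# Toy feedback-loop simulation: an engagement-ranked recommender amplifies
# a small initial difference between otherwise identical items.
# All identifiers and numbers here are illustrative assumptions.

def recommend(scores):
    """Rank item ids by current engagement score, highest first."""
    return sorted(scores, key=scores.get, reverse=True)

def simulate(rounds=50):
    # Two identical items; item "a" starts with a ~1% head start.
    scores = {"a": 101.0, "b": 100.0}
    for _ in range(rounds):
        top = recommend(scores)[0]  # the top-ranked item gets the exposure
        scores[top] += 1.0          # exposure drives further engagement
    return scores

final = simulate()
# Item "a" wins every round, so the 1-point gap compounds into a 51-point gap,
# while "b" never gains: the ranking is self-reinforcing.
```

The point of the sketch is that no explicit prejudice is coded anywhere; the opacity and unfairness emerge from the interaction of a neutral-looking ranking rule with its own outputs.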
Issues regarding how to safeguard the goals, values, and automated processes of platformization, how to counter fake news, how to discern misinformation, and how to regain media trust in a world of AI remain controversial (Shin, 2023). At the root of these questions are concerns about how to mitigate biases and discrimination in data.

Journal of Broadcasting & Electronic Media, 2023, Vol. 67, No. 3, 241–245. https://doi.org/10.1080/08838151.2023.2225665