A Study on Media Coverage of Algorithms under Framing Theory: The Case of Reports on Algorithms from Xinhua News Agency from 2021 to 2022

Beiya Shi, Yu Pei
Journal of Humanities and Social Sciences Studies, published 2023-05-25. DOI: 10.32996/jhsss.2023.5.5.11

Abstract

The continuous improvement and broad application of algorithms have demonstrated both the promise of technological power and the perils of new problems and entrenched social issues. As a vital channel for information dissemination, mainstream media reflect, through their coverage of algorithms, the differing attitudes and expectations that diverse groups hold toward algorithmic development. The objective of this study is therefore to explore how influential media tell the story of algorithms, how individuals from different walks of life view this contentious topic, and, accordingly, what we can expect from future news coverage of algorithms. Based on framing theory, this study systematically analyzes the news reports about algorithms published by Xinhua News Agency over the past two years (2021-2022) through three frames: the thematic frame, the responsibility frame, and the emotional frame. The results reveal that Xinhua News Agency has paid considerable attention to the regulation and governance of algorithms from the government's perspective. Ethical concerns regarding the fair use of algorithms are also hotly debated among the general public. Companies and research institutions mainly focus on publicizing their latest achievements in innovative applications and technological breakthroughs, while the former are largely absent from the discussion and drafting of regulations in this area. Notably, negative reports in this domain were prevalent among the public, while the Chinese government, businesses, and research institutions tended to approach algorithm-related topics from a positive or neutral standpoint. In the future, related disputes will persist in the public opinion field, and people may enhance their algorithmic literacy in the process. Governance and development may advance in step. Co-governance with the engagement of enterprises and the diversification of algorithmic applications both signal a promising future for this technology.