Ethics and Transparency Issues in Digital Platforms: An Overview

Impact Factor: 3.1, JCR Q2 (Computer Science, Artificial Intelligence)
Leilasadat Mirghaderi, Monika Sziron, Elisabeth Hildt
{"title":"Ethics and Transparency Issues in Digital Platforms: An Overview","authors":"Leilasadat Mirghaderi, Monika Sziron, Elisabeth Hildt","doi":"10.3390/ai4040042","DOIUrl":null,"url":null,"abstract":"There is an ever-increasing application of digital platforms that utilize artificial intelligence (AI) in our daily lives. In this context, the matters of transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and provide recommendations for improving transparency issues on digital platforms. First, by surveying the literature and reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we will identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of existing literature and provide our perspective on how to address the raised concerns. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms. These include a lack of transparency with regard to who contributes to platforms; lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers; and lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this is an attempt to bridge the gap between principles and operationalization.","PeriodicalId":93633,"journal":{"name":"AI (Basel, Switzerland)","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI (Basel, Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/ai4040042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Digital platforms that utilize artificial intelligence (AI) are increasingly applied in our daily lives. In this context, transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and to provide recommendations for addressing transparency issues on digital platforms. First, by surveying the literature, reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of the existing literature and provide our perspective on how to address the concerns raised. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms: a lack of transparency with regard to who contributes to platforms; a lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers; and a lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this is an attempt to bridge the gap between principles and operationalization.
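The three zones of non-transparency named above also hint at what an operationalized disclosure could look like in practice. The following is a minimal, hypothetical sketch in Python of a structured transparency record covering those three zones; the record type, field names, and example values are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: a structured transparency disclosure covering the
# three zones of non-transparency identified in the abstract. All class and
# field names are illustrative assumptions, not a schema proposed by the authors.

@dataclass
class ContributorDisclosure:
    """Zone 1: who contributes content or data to the platform."""
    contributor_types: List[str]        # e.g., ["registered users", "advertisers", "bots"]
    automated_account_labelling: str    # how automated contributors are flagged to users

@dataclass
class WorkerDisclosure:
    """Zone 2: who works behind the platform, what they contribute, and under what conditions."""
    roles: List[str]                    # e.g., ["content moderators", "data labellers"]
    contribution_summary: str           # what this labour adds to the service
    working_conditions_summary: str     # publicly reported conditions (pay, hours, support)

@dataclass
class AlgorithmDisclosure:
    """Zone 3: how an algorithm is developed and governed."""
    purpose: str                        # what the algorithm ranks, recommends, or decides
    development_summary: str            # high-level description of data and design choices
    governance_process: str             # who audits, reviews, and approves changes

@dataclass
class PlatformTransparencyRecord:
    """One platform's disclosure across the three zones."""
    platform_name: str
    contributors: ContributorDisclosure
    workers: WorkerDisclosure
    algorithms: List[AlgorithmDisclosure] = field(default_factory=list)
```

The point of the sketch is only that each zone can map to concrete, checkable fields rather than remaining a high-level principle; how such a record would be filled in, audited, or enforced is exactly the kind of operationalization question the paper addresses.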
Source journal: AI (Basel, Switzerland). CiteScore: 7.20; self-citation rate: 0.00%; review time: 11 weeks.