Regulating Algorithmic Assemblages: Exploring Beyond Corporate AI Ethics

Md.Mafiqul Islam
{"title":"Regulating Algorithmic Assemblages: Exploring Beyond Corporate AI Ethics","authors":"Md.Mafiqul Islam","doi":"10.60087/jklst.vol3.n3.p28","DOIUrl":null,"url":null,"abstract":"The rapid advancement of artificial intelligence (AI) systems, fueled by extensive research and development investments, has ushered in a new era where AI permeates decision-making processes across various sectors. This proliferation is largely attributed to the availability of vast digital datasets, particularly in machine learning, enabling AI systems to discern intricate correlations and furnish valuable insights from data on human behavior and other phenomena. However, the widespread integration of AI into private and public domains has raised concerns regarding the neutrality and objectivity of automated decision-making processes. Such systems, despite their technological sophistication, are not immune to biases and ethical dilemmas inherent in human judgments. Consequently, there is a growing call for regulatory oversight to ensure transparency and accountability in AI deployment, akin to traditional regulatory frameworks governing analogous processes. This paper critically examines the implications and ripple effects of incorporating AI into existing social systems from an 'AI ethics' standpoint. It questions the adequacy of self-policing mechanisms advocated by corporate entities, highlighting inherent limitations in corporate social responsibility paradigms. Additionally, it scrutinizes well-intentioned regulatory initiatives, such as the EU AI ethics initiative, which may overlook broader societal impacts while prioritizing the desirability of AI applications. The discussion underscores the necessity of adopting a holistic approach that transcends individual and group rights considerations to address the profound societal implications of AI, encapsulated by the concept of 'algorithmic assemblage'.","PeriodicalId":106651,"journal":{"name":"Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online)","volume":" 10","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.60087/jklst.vol3.n3.p28","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The rapid advancement of artificial intelligence (AI) systems, fueled by extensive research and development investment, has ushered in a new era in which AI permeates decision-making processes across many sectors. This proliferation is largely attributable to the availability of vast digital datasets, particularly for machine learning, which enable AI systems to discern intricate correlations and furnish valuable insights from data on human behavior and other phenomena. However, the widespread integration of AI into private and public domains has raised concerns about the neutrality and objectivity of automated decision-making. Despite their technological sophistication, such systems are not immune to the biases and ethical dilemmas inherent in human judgment. Consequently, there is a growing call for regulatory oversight to ensure transparency and accountability in AI deployment, comparable to the traditional frameworks that govern analogous processes. This paper critically examines the implications and ripple effects of incorporating AI into existing social systems from an 'AI ethics' standpoint. It questions the adequacy of the self-policing mechanisms advocated by corporate entities, highlighting inherent limitations of corporate social responsibility paradigms. It also scrutinizes well-intentioned regulatory initiatives, such as the EU AI ethics initiative, which may prioritize the desirability of AI applications while overlooking their broader societal impacts. The discussion underscores the need for a holistic approach that transcends individual and group rights considerations to address the profound societal implications of AI, encapsulated by the concept of the 'algorithmic assemblage'.