Ethical Considerations in AI: Navigating Bias, Fairness, and Accountability

Rahul Jain, Deepti Pathak, Manav Chandan, Rigveda Convent, Hr. Sec. School Kathua
{"title":"Ethical Considerations in AI: Navigating Bias, Fairness, and Accountability","authors":"Rahul Jain, Deepti Pathak, Manav Chandan, Rigveda Convent, Hr. Sec. School Kathua","doi":"10.48047/resmil.v10i1.22","DOIUrl":null,"url":null,"abstract":": The reconciliation of man-made reasoning (artificial intelligence) and enormous information examination in dynamic cycles has introduced another period of mechanical headways and extraordinary abilities across different areas. Notwithstanding, this expanding collaboration has likewise caused a corresponding ascent in moral quandaries and contemplations. This examination article explores the complex scene of moral issues in man-made intelligence fueled direction, with a specific accentuation on the moral contemplations related with large information driven choice cycles. Drawing from an extensive survey of the current writing, this article enlightens the different moral systems pertinent to man-made intelligence and enormous information morals. It takes apart unambiguous moral issues that arise with regards to computer based intelligence navigation, including algorithmic predisposition, straightforwardness, and responsibility, while likewise investigating the unpredictable moral contemplations involved in the assortment and use of huge information, like information protection, security, and informed assent. To empirically investigate the scope and repercussions of these ethical quandaries, the study employs a mixed-method approach that combines qualitative and quantitative data analysis. The discoveries highlight the squeezing need to create and carry out moral structures to direct artificial intelligence and huge information navigation, as well as to propose useful suggestions for alleviating these moral difficulties.","PeriodicalId":517991,"journal":{"name":"resmilitaris","volume":"20 S32","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"resmilitaris","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48047/resmil.v10i1.22","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The integration of artificial intelligence (AI) and big data analytics into decision-making processes has ushered in a new era of technological advancement and unprecedented capability across diverse sectors. However, this growing synergy has also produced a corresponding rise in ethical dilemmas and considerations. This research article explores the complex landscape of ethical issues in AI-powered decision-making, with particular emphasis on the ethical considerations associated with big-data-driven decision processes. Drawing on an extensive review of the existing literature, the article outlines the various ethical frameworks relevant to AI and big data ethics. It dissects specific ethical issues that arise in AI decision-making, including algorithmic bias, transparency, and accountability, while also examining the intricate ethical considerations involved in the collection and use of big data, such as data privacy, security, and informed consent. To empirically investigate the scope and repercussions of these ethical quandaries, the study employs a mixed-methods approach that combines qualitative and quantitative data analysis. The findings underscore the pressing need to develop and implement ethical frameworks to guide AI and big-data decision-making, and the article proposes practical recommendations for mitigating these ethical challenges.
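
The abstract names algorithmic bias as a central concern but does not say how such bias might be quantified. As a hedged illustration only, not drawn from the paper, the following Python sketch computes two commonly used group-fairness measures, the demographic parity difference and the disparate impact ratio, over hypothetical binary decisions split by a protected attribute; the function names and toy data are assumptions made for this example.

```python
# Illustrative sketch (not from the paper): two common group-fairness checks,
# demographic parity difference and disparate impact ratio, computed on a
# hypothetical set of binary model decisions split by a protected attribute.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the positive-decision rate for each protected group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        counts[g] += 1
        positives[g] += d
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_difference(rates):
    """Largest gap in selection rates across groups (0 means parity)."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest ('80% rule' compares this to 0.8)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit data: 1 = favourable decision; groups "A" and "B".
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rates = selection_rates(decisions, groups)
    print("Selection rates:", rates)                        # A ~0.67, B ~0.17
    print("Parity difference:", demographic_parity_difference(rates))
    print("Disparate impact ratio:", disparate_impact_ratio(rates))
```

Metrics of this kind capture only one narrow, statistical facet of fairness; the broader concerns the article raises, such as transparency, accountability, and informed consent, require governance and process-level frameworks rather than a single numeric test.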