Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review.

Impact Factor: 3.0 · CAS Region 1 (Philosophy) · JCR Q1 (Ethics)
Tino Kreutzer, James Orbinski, Lora Appel, Aijun An, Jerome Marston, Ella Boone, Patrick Vinck
DOI: 10.1186/s12910-025-01189-2
Journal: BMC Medical Ethics, 26(1):49
Published: 2025-04-15 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11998222/pdf/
Citations: 0

Abstract


Background: Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises.

Methods: We systematically searched databases to identify peer-reviewed studies published since 2010. Data and findings were standardized, grouping ethical issues into the value categories of autonomy, beneficence, non-maleficence, and justice. The study protocol followed Arksey and O'Malley's approach and PRISMA reporting guidelines.

Results: We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle-income country, while four included an author from a low-income country. We identified 22 ethical issues, which were then grouped along the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of the included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.

Conclusions: Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations' autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, which are disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices.

Trial registration: Not applicable.

Source journal: BMC Medical Ethics (Medical Ethics)
CiteScore: 5.20 · Self-citation rate: 7.40% · Articles per year: 108 · Review time: >12 weeks
Journal description: BMC Medical Ethics is an open access journal publishing original peer-reviewed research articles in relation to the ethical aspects of biomedical research and clinical practice, including professional choices and conduct, medical technologies, healthcare systems and health policies.