The ethical evaluation of large language models and its optimization

Yujing Lyu, Yanyong Du
{"title":"大型语言模型的伦理评价及其优化","authors":"Yujing Lyu,&nbsp;Yanyong Du","doi":"10.1007/s43681-024-00654-9","DOIUrl":null,"url":null,"abstract":"<div><p>The utilization of large language models (LLMs)has experienced tremendous growth in the past few years, bringing numerous benefits and conveniences. Yet, this expansion has also underscored ethical concerns, including issues such as hallucinations, toxic content, biased data and other unintended consequences. While the governance of these risks has garnered attention, a comprehensive and rigorous analysis of ethical evaluation connected to LLMs remains lacking. Against the background, this paper conducts an analysis of 105 assessment tools developed by governmental agencies, academic institutions, research groups, and technology corporations. The findings reveal a convergence emerging of these assessment principles, primarily focusing on data ethic, bias, discrimination and fairness, safety, robustness, human preferences alignment, particular ethical scenarios, responsibility, transparency and interpretability, and public participation. The study also presents the limitations of current ethical assessments paired with a critical analysis. This involves considering the collaboration between various institutions while taking into account the general public, the necessity of incorporating multidimensional real-world ethical contexts and related datasets, and the importance of integrating worldwide AI ethics guidelines with the ethical evaluation of LLMs. Such optimization can be incorporated into future evaluation efforts, aligning the technical advancements of LLMs with ethical considerations.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4579 - 4592"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The ethical evaluation of large language models and its optimization\",\"authors\":\"Yujing Lyu,&nbsp;Yanyong Du\",\"doi\":\"10.1007/s43681-024-00654-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The utilization of large language models (LLMs)has experienced tremendous growth in the past few years, bringing numerous benefits and conveniences. Yet, this expansion has also underscored ethical concerns, including issues such as hallucinations, toxic content, biased data and other unintended consequences. While the governance of these risks has garnered attention, a comprehensive and rigorous analysis of ethical evaluation connected to LLMs remains lacking. Against the background, this paper conducts an analysis of 105 assessment tools developed by governmental agencies, academic institutions, research groups, and technology corporations. The findings reveal a convergence emerging of these assessment principles, primarily focusing on data ethic, bias, discrimination and fairness, safety, robustness, human preferences alignment, particular ethical scenarios, responsibility, transparency and interpretability, and public participation. The study also presents the limitations of current ethical assessments paired with a critical analysis. This involves considering the collaboration between various institutions while taking into account the general public, the necessity of incorporating multidimensional real-world ethical contexts and related datasets, and the importance of integrating worldwide AI ethics guidelines with the ethical evaluation of LLMs. 
Such optimization can be incorporated into future evaluation efforts, aligning the technical advancements of LLMs with ethical considerations.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 5\",\"pages\":\"4579 - 4592\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-024-00654-9\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00654-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The utilization of large language models (LLMs) has experienced tremendous growth in the past few years, bringing numerous benefits and conveniences. Yet this expansion has also underscored ethical concerns, including hallucinations, toxic content, biased data, and other unintended consequences. While the governance of these risks has garnered attention, a comprehensive and rigorous analysis of the ethical evaluation of LLMs remains lacking. Against this background, this paper analyzes 105 assessment tools developed by governmental agencies, academic institutions, research groups, and technology corporations. The findings reveal an emerging convergence among these assessment principles, which primarily focus on data ethics; bias, discrimination, and fairness; safety; robustness; alignment with human preferences; particular ethical scenarios; responsibility; transparency and interpretability; and public participation. The study also presents a critical analysis of the limitations of current ethical assessments. This involves the need for collaboration among institutions that also takes the general public into account, the necessity of incorporating multidimensional real-world ethical contexts and related datasets, and the importance of integrating worldwide AI ethics guidelines with the ethical evaluation of LLMs. Such optimization can be incorporated into future evaluation efforts, aligning the technical advancement of LLMs with ethical considerations.
