AI safety practices and public perception: Historical analysis, survey insights, and a weighted scoring framework

IF 4.3
Maikel Leon
{"title":"人工智能安全实践和公众认知:历史分析、调查见解和加权评分框架","authors":"Maikel Leon","doi":"10.1016/j.iswa.2025.200583","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial Intelligence (AI) safety has evolved in tandem with advances in technology and shifts in societal attitudes. This article presents a historical and empirical analysis of AI safety concerns from the mid-twentieth century to the present, integrating archival records, media narratives, survey data, landmark research, and regulatory developments. Early anxieties (rooted in Cold War geopolitics and science fiction) focused on physical robots and autonomous weapons. In contrast, contemporary debates focus on algorithmic bias, misinformation, job displacement, and existential risks posed by advanced systems, such as Large Language Models (LLMs). This article examines the impact of key scholarly contributions, significant events, and regulatory milestones on public perception and governance approaches. Building on this context, this study proposes an improved LLM safety scoring system that prioritizes existential risk mitigation, transparency, and governance accountability. Applying the proposed framework to leading AI developers reveals significant variation in safety commitments. The results underscore how weighting choices affect rankings. Comparative analysis with existing indices highlights the importance of nuanced, multidimensional evaluation methods. The paper concludes by identifying pressing governance challenges, including the need for global cooperation, robust interpretability, and ongoing monitoring of harm in high-stakes domains. These findings demonstrate that AI safety is not static but somewhat shaped by historical context, technical capabilities, and societal values—requiring the continuous adaptation of both policy and evaluation frameworks to align AI systems with human interests.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"28 ","pages":"Article 200583"},"PeriodicalIF":4.3000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI safety practices and public perception: Historical analysis, survey insights, and a weighted scoring framework\",\"authors\":\"Maikel Leon\",\"doi\":\"10.1016/j.iswa.2025.200583\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Artificial Intelligence (AI) safety has evolved in tandem with advances in technology and shifts in societal attitudes. This article presents a historical and empirical analysis of AI safety concerns from the mid-twentieth century to the present, integrating archival records, media narratives, survey data, landmark research, and regulatory developments. Early anxieties (rooted in Cold War geopolitics and science fiction) focused on physical robots and autonomous weapons. In contrast, contemporary debates focus on algorithmic bias, misinformation, job displacement, and existential risks posed by advanced systems, such as Large Language Models (LLMs). This article examines the impact of key scholarly contributions, significant events, and regulatory milestones on public perception and governance approaches. Building on this context, this study proposes an improved LLM safety scoring system that prioritizes existential risk mitigation, transparency, and governance accountability. Applying the proposed framework to leading AI developers reveals significant variation in safety commitments. 
The results underscore how weighting choices affect rankings. Comparative analysis with existing indices highlights the importance of nuanced, multidimensional evaluation methods. The paper concludes by identifying pressing governance challenges, including the need for global cooperation, robust interpretability, and ongoing monitoring of harm in high-stakes domains. These findings demonstrate that AI safety is not static but somewhat shaped by historical context, technical capabilities, and societal values—requiring the continuous adaptation of both policy and evaluation frameworks to align AI systems with human interests.</div></div>\",\"PeriodicalId\":100684,\"journal\":{\"name\":\"Intelligent Systems with Applications\",\"volume\":\"28 \",\"pages\":\"Article 200583\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Systems with Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667305325001097\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems with Applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667305325001097","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Artificial Intelligence (AI) safety has evolved in tandem with advances in technology and shifts in societal attitudes. This article presents a historical and empirical analysis of AI safety concerns from the mid-twentieth century to the present, integrating archival records, media narratives, survey data, landmark research, and regulatory developments. Early anxieties (rooted in Cold War geopolitics and science fiction) focused on physical robots and autonomous weapons. In contrast, contemporary debates focus on algorithmic bias, misinformation, job displacement, and existential risks posed by advanced systems, such as Large Language Models (LLMs). This article examines the impact of key scholarly contributions, significant events, and regulatory milestones on public perception and governance approaches. Building on this context, this study proposes an improved LLM safety scoring system that prioritizes existential risk mitigation, transparency, and governance accountability. Applying the proposed framework to leading AI developers reveals significant variation in safety commitments. The results underscore how weighting choices affect rankings. Comparative analysis with existing indices highlights the importance of nuanced, multidimensional evaluation methods. The paper concludes by identifying pressing governance challenges, including the need for global cooperation, robust interpretability, and ongoing monitoring of harm in high-stakes domains. These findings demonstrate that AI safety is not static but shaped by historical context, technical capabilities, and societal values, requiring the continuous adaptation of both policy and evaluation frameworks to align AI systems with human interests.
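
The abstract describes a weighted scoring framework for LLM safety but does not list its exact dimensions, weights, or per-developer scores. As a minimal sketch of how such a framework operates, and of the claim that weighting choices affect rankings, the Python snippet below assumes three hypothetical dimensions (existential-risk mitigation, transparency, governance accountability) and invented scores for unnamed developers; none of these names or values come from the paper.

```python
# Illustrative sketch of a weighted safety-scoring scheme.
# Dimension names, weights, and scores are hypothetical assumptions,
# not the paper's data; they show how weighting choices can reorder rankings.

from typing import Dict

def weighted_score(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of per-dimension scores (0-10); weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(scores[dim] * w for dim, w in weights.items()) / total

# Hypothetical per-dimension scores (0-10) for three unnamed developers.
developers = {
    "Developer A": {"existential_risk": 9.0, "transparency": 3.0, "governance": 4.0},
    "Developer B": {"existential_risk": 5.0, "transparency": 9.0, "governance": 8.0},
    "Developer C": {"existential_risk": 7.0, "transparency": 6.0, "governance": 6.0},
}

# Two weighting schemes: one prioritizing existential-risk mitigation,
# one weighting all dimensions equally.
weighting_schemes = {
    "risk-prioritized": {"existential_risk": 0.6, "transparency": 0.2, "governance": 0.2},
    "equal":            {"existential_risk": 1/3, "transparency": 1/3, "governance": 1/3},
}

for scheme, weights in weighting_schemes.items():
    ranked = sorted(developers,
                    key=lambda d: weighted_score(developers[d], weights),
                    reverse=True)
    print(scheme, "->", ranked)
# risk-prioritized -> ['Developer A', 'Developer C', 'Developer B']
# equal            -> ['Developer B', 'Developer C', 'Developer A']
```

With these invented numbers, shifting weight toward existential-risk mitigation reverses the ordering produced by equal weights, which is the sensitivity to weighting choices that the abstract highlights.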
Source journal: Intelligent Systems with Applications
CiteScore: 5.60
Self-citation rate: 0.00%