{"title":"AI safety practices and public perception: Historical analysis, survey insights, and a weighted scoring framework","authors":"Maikel Leon","doi":"10.1016/j.iswa.2025.200583","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial Intelligence (AI) safety has evolved in tandem with advances in technology and shifts in societal attitudes. This article presents a historical and empirical analysis of AI safety concerns from the mid-twentieth century to the present, integrating archival records, media narratives, survey data, landmark research, and regulatory developments. Early anxieties (rooted in Cold War geopolitics and science fiction) focused on physical robots and autonomous weapons. In contrast, contemporary debates focus on algorithmic bias, misinformation, job displacement, and existential risks posed by advanced systems, such as Large Language Models (LLMs). This article examines the impact of key scholarly contributions, significant events, and regulatory milestones on public perception and governance approaches. Building on this context, this study proposes an improved LLM safety scoring system that prioritizes existential risk mitigation, transparency, and governance accountability. Applying the proposed framework to leading AI developers reveals significant variation in safety commitments. The results underscore how weighting choices affect rankings. Comparative analysis with existing indices highlights the importance of nuanced, multidimensional evaluation methods. The paper concludes by identifying pressing governance challenges, including the need for global cooperation, robust interpretability, and ongoing monitoring of harm in high-stakes domains. These findings demonstrate that AI safety is not static but somewhat shaped by historical context, technical capabilities, and societal values—requiring the continuous adaptation of both policy and evaluation frameworks to align AI systems with human interests.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"28 ","pages":"Article 200583"},"PeriodicalIF":4.3000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems with Applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667305325001097","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Artificial Intelligence (AI) safety has evolved in tandem with advances in technology and shifts in societal attitudes. This article presents a historical and empirical analysis of AI safety concerns from the mid-twentieth century to the present, integrating archival records, media narratives, survey data, landmark research, and regulatory developments. Early anxieties (rooted in Cold War geopolitics and science fiction) focused on physical robots and autonomous weapons. In contrast, contemporary debates focus on algorithmic bias, misinformation, job displacement, and existential risks posed by advanced systems, such as Large Language Models (LLMs). This article examines the impact of key scholarly contributions, significant events, and regulatory milestones on public perception and governance approaches. Building on this context, this study proposes an improved LLM safety scoring system that prioritizes existential risk mitigation, transparency, and governance accountability. Applying the proposed framework to leading AI developers reveals significant variation in safety commitments. The results underscore how weighting choices affect rankings. Comparative analysis with existing indices highlights the importance of nuanced, multidimensional evaluation methods. The paper concludes by identifying pressing governance challenges, including the need for global cooperation, robust interpretability, and ongoing monitoring of harms in high-stakes domains. These findings demonstrate that AI safety is not static but rather shaped by historical context, technical capabilities, and societal values, requiring the continuous adaptation of both policy and evaluation frameworks to align AI systems with human interests.
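The weighted scoring idea summarized above can be illustrated with a small numerical sketch. The dimension names, weights, per-developer scores, and developer labels below are hypothetical placeholders chosen for illustration, not values from the paper; the sketch only shows how shifting weight toward existential-risk mitigation can reorder a ranking, which is the abstract's point about weighting choices.

```python
# Minimal sketch of a weighted safety-scoring computation.
# All dimensions, weights, scores, and developer names are hypothetical
# illustrations, not data or results from the paper.

from typing import Dict, List

# Hypothetical per-dimension scores on a 0-10 scale for three unnamed developers.
SCORES: Dict[str, Dict[str, float]] = {
    "Developer A": {"existential_risk_mitigation": 9.0, "transparency": 3.0, "governance_accountability": 5.0},
    "Developer B": {"existential_risk_mitigation": 4.0, "transparency": 9.0, "governance_accountability": 8.0},
    "Developer C": {"existential_risk_mitigation": 7.0, "transparency": 6.0, "governance_accountability": 6.0},
}


def weighted_score(dim_scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Return the weight-normalized aggregate score for one developer."""
    total_weight = sum(weights.values())
    return sum(weights[d] * dim_scores[d] for d in weights) / total_weight


def rank(weights: Dict[str, float]) -> List[str]:
    """Rank developers from highest to lowest aggregate score under the given weights."""
    return sorted(SCORES, key=lambda dev: weighted_score(SCORES[dev], weights), reverse=True)


if __name__ == "__main__":
    # Weighting that prioritizes existential-risk mitigation, as the framework emphasizes.
    risk_heavy = {"existential_risk_mitigation": 0.6, "transparency": 0.2, "governance_accountability": 0.2}
    # Equal weighting across dimensions, for comparison.
    equal = {"existential_risk_mitigation": 1 / 3, "transparency": 1 / 3, "governance_accountability": 1 / 3}

    print("Risk-weighted ranking:", rank(risk_heavy))
    print("Equal-weighted ranking:", rank(equal))
```

With these made-up numbers, the risk-heavy weighting places Developer A first while equal weighting places Developer B first, mirroring the abstract's observation that the choice of weights, not just the underlying scores, drives the resulting rankings.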