Building trust: Foundations of security, safety, and transparency in AI
Huzaifa Sidhpurwala, Garth Mollett, Emily Fox, Mark Bestavros, Huamin Chen
AI Magazine, vol. 46, no. 2, published 2025-05-20. DOI: 10.1002/aaai.70005
https://onlinelibrary.wiley.com/doi/10.1002/aaai.70005
Citation count: 0
Abstract
This paper explores the rapidly evolving ecosystem of publicly available AI models and their potential implications for the security and safety landscape. Understanding their risks and vulnerabilities is crucial as AI models become increasingly prevalent. We review the current security and safety landscape, highlighting challenges such as issue tracking, remediation, and the absence of AI model lifecycle and ownership processes. We propose comprehensive strategies to enhance security and safety for both model developers and end-users. This paper provides several foundational pieces for more standardized security, safety, and transparency in developing and operating generative AI models and in the larger open ecosystems and communities forming around them.
Journal description:
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, reports on conferences, symposia, and workshops, and timely columns on topics of interest to AI scientists.