Securing Artificial Intelligence Models: A Comprehensive Cybersecurity Approach

Adedeji Olugboja
DOI: 10.14738/abr.123.16770
Journal: Archives of business research, 32(6)
Published: 2024-04-05 (Journal Article)
Citations: 0

Abstract

As artificial intelligence (AI) becomes integral to diverse applications, the imperative to secure AI models against evolving threats has gained paramount importance. This paper presents a novel cybersecurity framework tailored explicitly for AI models, synthesizing insights from a comprehensive literature review, real-world case studies, and practical implementation strategies. Drawing from seminal works on adversarial attacks, data privacy, and secure deployment practices, the framework addresses vulnerabilities throughout the AI development lifecycle. Preliminary results indicate a significant enhancement in the resilience of AI models, demonstrating reduced success rates of adversarial attacks, effective data encryption, and robust secure deployment practices. The framework's adaptability across diverse use cases underscores its practicality. These findings mark a crucial step toward establishing comprehensive and practical cybersecurity measures, contributing to the ongoing discourse on securing the expanding field of artificial intelligence. Ongoing efforts involve further validation, optimization, and exploration of additional security measures to fortify AI models in an ever-changing threat landscape.
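The abstract reports reduced success rates of adversarial attacks as one resilience metric. As a hypothetical illustration (not taken from the paper, which does not disclose its methods here), the sketch below measures that kind of metric for the simplest gradient-based attack, an FGSM-style perturbation against a linear classifier; all names and parameters are illustrative assumptions.

```python
# Hypothetical sketch: estimating adversarial robustness as the accuracy
# drop under an FGSM-style perturbation, for a logistic-regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(X, y, w, b, eps):
    """Step each input by eps in the sign of the input-gradient of the
    cross-entropy loss, which for logistic regression is (p - y) * w."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]  # dL/dx per sample
    return X + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
w_true = np.array([2.0, -1.5])
y = (X @ w_true > 0).astype(float)

# Use the true separating weights directly to keep the sketch short.
w, b = w_true, 0.0

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
X_adv = fgsm_perturb(X, y, w, b, eps=0.5)
adv_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A defense of the kind the framework envisions would be judged by how much it closes the gap between the clean and adversarial accuracies.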