Peihao Li, Jie Huang, Shuaishuai Zhang, Chunyang Qi
{"title":"SecureEI:边缘智能人工智能模型的主动知识产权保护","authors":"Peihao Li , Jie Huang , Shuaishuai Zhang , Chunyang Qi","doi":"10.1016/j.comnet.2024.110825","DOIUrl":null,"url":null,"abstract":"<div><div>Deploying AI models on edge computing platforms enhances real-time performance, reduces network dependency, and ensures data privacy on terminal devices. However, these advantages come with increased risks of model leakage and misuse due to the vulnerability of edge environments to physical and cyber attacks compared to cloud-based solutions. To mitigate these risks, we propose SecureEI, a proactive intellectual property protection method for AI models that leverages model splitting and data poisoning techniques. SecureEI divides the model into two components: DeviceNet, which processes input data into protected license data, and EdgeNet, which operates on the license data to perform the intended tasks. This method ensures that only the transformed license data yields high model accuracy, while original data remains unrecognizable, even under fine-tuning attacks. We further employ targeted training strategies and weight adjustments to enhance the model’s resistance to potential attacks that aim to restore its recognition capabilities for original data. Evaluations on MNIST, Cifar10, and FaceScrub datasets demonstrate that SecureEI not only maintains high model accuracy on license data but also significantly bolsters defense against fine-tuning attacks, outperforming existing state-of-the-art techniques in safeguarding AI intellectual property on edge platforms.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4000,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SecureEI: Proactive intellectual property protection of AI models for edge intelligence\",\"authors\":\"Peihao Li , Jie Huang , Shuaishuai Zhang , Chunyang Qi\",\"doi\":\"10.1016/j.comnet.2024.110825\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deploying AI models on edge computing platforms enhances real-time performance, reduces network dependency, and ensures data privacy on terminal devices. However, these advantages come with increased risks of model leakage and misuse due to the vulnerability of edge environments to physical and cyber attacks compared to cloud-based solutions. To mitigate these risks, we propose SecureEI, a proactive intellectual property protection method for AI models that leverages model splitting and data poisoning techniques. SecureEI divides the model into two components: DeviceNet, which processes input data into protected license data, and EdgeNet, which operates on the license data to perform the intended tasks. This method ensures that only the transformed license data yields high model accuracy, while original data remains unrecognizable, even under fine-tuning attacks. We further employ targeted training strategies and weight adjustments to enhance the model’s resistance to potential attacks that aim to restore its recognition capabilities for original data. 
Evaluations on MNIST, Cifar10, and FaceScrub datasets demonstrate that SecureEI not only maintains high model accuracy on license data but also significantly bolsters defense against fine-tuning attacks, outperforming existing state-of-the-art techniques in safeguarding AI intellectual property on edge platforms.</div></div>\",\"PeriodicalId\":50637,\"journal\":{\"name\":\"Computer Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-10-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389128624006571\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128624006571","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
SecureEI: Proactive intellectual property protection of AI models for edge intelligence
Deploying AI models on edge computing platforms enhances real-time performance, reduces network dependency, and ensures data privacy on terminal devices. However, these advantages come with increased risks of model leakage and misuse due to the vulnerability of edge environments to physical and cyber attacks compared to cloud-based solutions. To mitigate these risks, we propose SecureEI, a proactive intellectual property protection method for AI models that leverages model splitting and data poisoning techniques. SecureEI divides the model into two components: DeviceNet, which processes input data into protected license data, and EdgeNet, which operates on the license data to perform the intended tasks. This method ensures that only the transformed license data yields high model accuracy, while original data remains unrecognizable, even under fine-tuning attacks. We further employ targeted training strategies and weight adjustments to enhance the model’s resistance to potential attacks that aim to restore its recognition capabilities for original data. Evaluations on MNIST, Cifar10, and FaceScrub datasets demonstrate that SecureEI not only maintains high model accuracy on license data but also significantly bolsters defense against fine-tuning attacks, outperforming existing state-of-the-art techniques in safeguarding AI intellectual property on edge platforms.
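The sketch below is a minimal illustration of the model-splitting idea described in the abstract, written in a PyTorch style. The class names, layer choices, and split point are hypothetical and not taken from the paper; the data-poisoning procedure and the actual transformation that produces SecureEI's license data are not reproduced here.

```python
# Illustrative sketch only: a model split into a device-side front end and an
# edge-side back end, so the edge model operates on an intermediate
# representation ("license data") rather than the raw input.
import torch
import torch.nn as nn


class DeviceNet(nn.Module):
    """Runs on the terminal device: maps raw input to protected license data."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.encoder(x)  # license data, not the raw image


class EdgeNet(nn.Module):
    """Runs on the edge platform: performs the task only on license data."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, license_data):
        return self.head(license_data)


# Inference path: predictions are produced from the transformed license data,
# so a leaked EdgeNet alone is not directly usable on original inputs.
device_net, edge_net = DeviceNet(), EdgeNet()
raw_image = torch.randn(1, 3, 32, 32)   # e.g. a CIFAR-10-sized input
license_data = device_net(raw_image)    # protected intermediate representation
logits = edge_net(license_data)
```

In this kind of split, the protection rests on how DeviceNet is trained (e.g., with the poisoning and targeted-training strategies the abstract mentions), which the sketch does not attempt to model.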
Journal introduction:
Computer Networks is an international, archival journal providing complete coverage of all topics of interest to those involved in computer communications networking. The audience includes researchers, managers, and operators of networks, as well as designers and implementers. The Editorial Board will consider any material for publication that is of interest to these groups.