Latest Articles: IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Model Agnostic Contrastive Explanations for Classification Models
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-24 | DOI: 10.1109/JETCAS.2024.3486114
Amit Dhurandhar; Tejaswini Pedapati; Avinash Balakrishnan; Pin-Yu Chen; Karthikeyan Shanmugam; Ruchir Puri
Abstract: Extensive surveys on explanations that are suitable for humans claim that being contrastive is one of an explanation's most important traits. A few methods have been proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose the Model Agnostic Contrastive Explanations Method (MACEM), which can generate contrastive explanations for any classification model where one is only able to query the class probabilities for a desired input. This allows us to generate contrastive explanations not only for neural networks, but also for models such as random forests, boosted trees, and even arbitrary ensembles that are still among the state of the art when learning on tabular data. Our method is also applicable to scenarios where only black-box access to the model is provided, meaning we can obtain only the predictions and prediction probabilities. With the advent of larger models, it is increasingly common to work in the black-box scenario, where the user does not necessarily have access to the model weights or parameters and can only interact with the model through an API. As such, to obtain meaningful explanations we propose a principled and scalable approach to handling real and categorical features, leading to novel formulations for computing the pertinent positives and negatives that form the essence of a contrastive explanation. A detailed treatment of this nature that focuses on scalability and handles different data types was not performed in previous work, which assumed all features to be positive real valued, with zero indicating the least interesting value. We part with this strong implicit assumption and generalize these methods to be applicable across a much wider range of problem settings. We validate our approach quantitatively and qualitatively on public datasets covering diverse domains.
Vol. 14, No. 4, pp. 789-798
Citations: 0
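To make the query-only setting above concrete, here is a minimal sketch (not the authors' implementation) of searching for a "pertinent negative": a small input change that flips the predicted class using nothing but class-probability queries. The greedy random coordinate search and the scikit-learn model are illustrative stand-ins for the paper's optimization-based formulation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def pertinent_negative(x, predict_proba, step=0.5, max_iters=500, seed=0):
    """Toy black-box search: perturb features until the top class flips."""
    rng = np.random.default_rng(seed)
    base = int(np.argmax(predict_proba(x[None])[0]))
    delta = np.zeros_like(x)
    for _ in range(max_iters):
        trial = delta.copy()
        j = rng.integers(len(x))                    # nudge one random feature
        trial[j] += rng.choice([-1.0, 1.0]) * step
        probs = predict_proba((x + trial)[None])[0]
        if int(np.argmax(probs)) != base:
            return x + trial, trial                 # change that flips the label
        # accept moves that do not increase confidence in the original class
        if probs[base] <= predict_proba((x + delta)[None])[0][base]:
            delta = trial
    return None, delta                              # no flip found (toy search)

x0 = X[0]
x_pn, change = pertinent_negative(x0, model.predict_proba)
print("original class:", model.predict(x0[None])[0])
if x_pn is not None:
    print("contrast class:", model.predict(x_pn[None])[0])
    print("feature deltas:", np.round(change, 2))
```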
Stealing the Invisible: Unveiling Pre-Trained CNN Models Through Adversarial Examples and Timing Side-Channels
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-23 | DOI: 10.1109/JETCAS.2024.3485133
Shubhi Shukla; Manaar Alam; Pabitra Mitra; Debdeep Mukhopadhyay
Abstract: Machine learning, with its myriad applications, has become an integral component of numerous AI systems. A common practice in this domain is the use of transfer learning, where a pre-trained model's architecture, readily available to the public, is fine-tuned to suit specific tasks. As Machine Learning as a Service (MLaaS) platforms increasingly use pre-trained models in their backends, it is crucial to safeguard these architectures and understand their vulnerabilities. In this work, we present ArchWhisperer, a model fingerprinting attack based on the novel observation that the classification patterns of adversarial images can be used as a means to steal models. Furthermore, adversarial image classifications are used in conjunction with model inference times to further enhance the attack in terms of effectiveness as well as query budget. ArchWhisperer is designed for typical user-level access in remote MLaaS environments and exploits varying misclassifications of adversarial images across different models to fingerprint several renowned Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. We profile remote model inference times to reduce the number of adversarial images needed, subsequently decreasing the number of queries required. We present results over 27 pre-trained models of different CNN and ViT architectures using the CIFAR-10 dataset and demonstrate a high accuracy of 88.8% while keeping the query budget under 20. This is a marked improvement over state-of-the-art works.
Vol. 14, No. 4, pp. 634-646
Citations: 0
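A rough sketch of the fingerprinting idea, under stated assumptions: a model's signature is the label pattern it produces on a fixed probe set combined with a crude timing measurement, and an unknown model is matched against a database of such signatures. The toy scikit-learn models, random probes, and 0.1 timing weight are stand-ins for the adversarial images, the CNN/ViT model zoo, and the timing analysis in the paper.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
zoo = {"logreg": LogisticRegression(max_iter=1000).fit(X, y),
       "forest": RandomForestClassifier(random_state=0).fit(X, y)}
probes = np.random.default_rng(1).normal(size=(32, 20))  # fixed probe inputs

def signature(predict, probes):
    """Label pattern plus a crude timing profile for one model."""
    t0 = time.perf_counter()
    labels = np.asarray(predict(probes))
    return labels, time.perf_counter() - t0

db = {name: signature(m.predict, probes) for name, m in zoo.items()}

def identify(predict, probes, db, timing_weight=0.1):
    labels, latency = signature(predict, probes)
    def dist(ref):
        ref_labels, ref_latency = ref
        # label disagreement, softly tie-broken by inference-time difference
        return np.mean(labels != ref_labels) + timing_weight * abs(latency - ref_latency)
    return min(db, key=lambda name: dist(db[name]))

victim = zoo["forest"]                 # pretend this model is behind an API
print("identified as:", identify(victim.predict, probes, db))
```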
RLFL: A Reinforcement Learning Aggregation Approach for Hybrid Federated Learning Systems Using Full and Ternary Precision
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-18 | DOI: 10.1109/JETCAS.2024.3483554
HamidReza Imani; Jeff Anderson; Samuel Farid; Abdolah Amirany; Tarek El-Ghazawi
Abstract: Federated Learning (FL) has emerged as an approach that provides a privacy-preserving and communication-efficient Machine Learning (ML) framework in mobile-edge environments, which are likely to be resource-constrained and heterogeneous. The required precision level and performance of each device may therefore vary with circumstances, giving rise to designs containing mixed-precision and quantized models. Among the various quantization schemes, binary and ternary representations are significant because they enable arrangements that strike effective balances between performance and precision. In this paper, we propose RLFL, a hybrid ternary/full-precision FL system, along with a Reinforcement Learning (RL) aggregation method, with the goal of improved performance compared to a homogeneous ternary environment. The system consists of a mix of clients with full-precision models and resource-constrained clients with ternary ML models. However, aggregating models with ternary and full-precision weights using traditional aggregation approaches presents a challenge due to the disparity in weight magnitudes. To obtain improved accuracy, we use a deep RL model to explore and optimize the contribution assigned to each client's model during aggregation in each iteration. We evaluate and compare the accuracy and communication overhead of the proposed approach against prior work for the classification of the MNIST, FMNIST, and CIFAR10 datasets. Evaluation results show that the proposed RLFL system, along with its aggregation technique, outperforms existing FL approaches in accuracy by 5% to 19% while imposing negligible computation overhead.
Vol. 14, No. 4, pp. 673-687
Citations: 0
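The sketch below illustrates two ingredients of the abstract under simplifying assumptions: a standard threshold-based ternarization of client weights, and an aggregation step where per-client contributions are a softmax over scores. In RLFL those scores come from a PPO agent; here fixed placeholder values stand in for the learned policy.

```python
import numpy as np

def ternarize(w, threshold_factor=0.7):
    """Map weights to {-alpha, 0, +alpha}, a common ternary scheme."""
    t = threshold_factor * np.mean(np.abs(w))
    mask = np.abs(w) > t
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(0)
full_clients = [rng.normal(size=(4, 4)) for _ in range(2)]        # full precision
tern_clients = [ternarize(rng.normal(size=(4, 4))) for _ in range(2)]
updates = full_clients + tern_clients

agent_scores = np.array([1.2, 0.8, 0.3, 0.5])   # placeholder for PPO outputs
weights = np.exp(agent_scores) / np.exp(agent_scores).sum()

# weighted aggregation bridges the magnitude gap between precisions
global_model = sum(w * u for w, u in zip(weights, updates))
print("aggregation weights:", np.round(weights, 3))
print("global model:\n", np.round(global_model, 3))
```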
A Reinforcement Learning-Based ELF Adversarial Malicious Sample Generation Method
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-15 | DOI: 10.1109/JETCAS.2024.3481273
Mingfu Xue; Jinlong Fu; Zhiyuan Li; Shifeng Ni; Heyi Wu; Leo Yu Zhang; Yushu Zhang; Weiqiang Liu
Abstract: In recent years, domestic Linux operating systems have developed rapidly, but the threat of ELF viruses has become increasingly prominent. Currently, domestic antivirus software for information technology application innovation (ITAI) operating systems shows insufficient capability in detecting ELF viruses, and research on generating malicious samples in ELF format is scarce. To fill this gap and meet the growing application needs of domestic antivirus software companies, this paper proposes an automatic ELF adversarial malicious sample generation technique based on reinforcement learning. Within a reinforcement learning framework, a sample is processed by cycles of feature extraction, malware detection, agent decision-making, and evade-detection operations until it evades the detection of antivirus engines. Specifically, nine feature extractor subclasses are used to extract features in multiple aspects, the PPO algorithm is used as the agent algorithm, and the action table in the evade-detection module contains 11 evade-detection operations for ELF malicious samples. The method is experimentally verified on an ITAI operating system, using an ELF malicious sample set for the Linux x86 platform as the original sample set. ClamAV's detection rate on this sample set drops from 98% before processing to 25% after processing, and 360 Security's detection rate drops from 4% to 1%. Furthermore, the average number of VirusTotal engines detecting the samples' maliciousness decreases from 39 to 15: many samples were detected by 41-43 engines before processing, while after evade-detection processing only 8-9 engines can detect the malware. In terms of executability and malicious-function consistency, the processed samples still run normally and their malicious functions remain consistent with those before processing. Overall, the proposed method can effectively generate adversarial ELF malware samples, and using it to test and train antivirus software can improve detection and defense capability against malware.
Vol. 14, No. 4, pp. 743-757
Citations: 0
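As a purely conceptual skeleton (no real malware, ELF parsing, or antivirus integration), the loop below mirrors the described cycle of feature extraction, detection, decision-making, and evasion actions: the detector is a stub that looks for a marker string, the actions are harmless byte edits, and a random policy stands in for PPO.

```python
import numpy as np

# placeholders, not the paper's 11 ELF evade-detection operations
ACTIONS = ["pad_tail", "append_section_name", "rewrite_marker"]

def extract_features(sample: bytes) -> np.ndarray:
    # stand-in for the nine feature-extractor subclasses
    return np.array([len(sample), sample.count(b"\x00")], dtype=float)

def detector_flags(sample: bytes) -> bool:
    # stub oracle standing in for queries to real antivirus engines
    return b"MARKER" in sample

def apply_action(sample: bytes, action: str) -> bytes:
    # harmless byte-level edits standing in for real ELF transformations
    if action == "pad_tail":
        return sample + b"\x00" * 8
    if action == "append_section_name":
        return sample + b".comment"
    return sample.replace(b"MARKER", b"M4RKER")   # "rewrite_marker"

rng = np.random.default_rng(0)
sample = b"MARKER" + b"\x00" * 16                 # toy stand-in for an ELF binary
for step in range(20):
    state = extract_features(sample)              # would feed the PPO policy
    action = ACTIONS[rng.integers(len(ACTIONS))]  # random policy in place of PPO
    sample = apply_action(sample, action)
    reward = 0.0 if detector_flags(sample) else 1.0   # evasion reward signal
    if reward > 0:
        print(f"stub detector evaded after {step + 1} steps")
        break
```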
RobustDA: Lightweight Robust Domain Adaptation for Evolving Data at Edge
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-11 | DOI: 10.1109/JETCAS.2024.3478359
Xinyu Guo; Xiaojiang Zuo; Rui Han; Junyan Ouyang; Jing Xie; Chi Harold Liu; Qinglong Zhang; Ying Guo; Jing Chen; Lydia Y. Chen
Abstract: AI applications powered by deep learning models are increasingly run natively at the edge. A deployed model not only encounters continuously evolving input distributions (domains) but also faces adversarial attacks from third parties. This necessitates adapting the model to shifting domains to maintain high natural accuracy while avoiding degradation of the model's robust accuracy. However, existing domain adaptation and adversarial attack prevention techniques often have conflicting optimization objectives, and they rely on time-consuming training processes. This paper presents RobustDA, an on-device lightweight approach that co-optimizes natural and robust accuracies in model retraining. It uses a set of low-rank adapters to retain all learned domains' knowledge with small overheads. In each model retraining, RobustDA constructs an adapter to separate domain-related and robustness-related model parameters to avoid conflicts in updating. Based on the retained knowledge, it quickly generates adversarial examples with high-quality pseudo-labels and uses them to accelerate the retraining process. Comparing against 14 state-of-the-art DA techniques under 7 prevalent adversarial attacks on edge devices, the proposed co-optimization approach improves natural and robust accuracies by 6.34% and 11.41% simultaneously. Under the same accuracy, RobustDA also speeds up the retraining process by 4.09x.
Vol. 14, No. 4, pp. 688-704
Citations: 0
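A minimal sketch of the low-rank adapter idea mentioned above, assuming LoRA-style additive corrections (the paper's exact adapter construction may differ): each domain keeps a small trainable pair (A, B) whose product perturbs frozen base weights, so per-domain knowledge costs d_in*r + r*d_out parameters instead of d_in*d_out.

```python
import numpy as np

class LowRankAdapter:
    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.01, size=(d_in, rank))  # trainable
        self.B = np.zeros((rank, d_out))                    # trainable, zero-init

    def delta(self):
        return self.A @ self.B          # rank-r correction, shape (d_in, d_out)

d_in, d_out = 64, 32
W_base = np.random.default_rng(1).normal(size=(d_in, d_out))   # frozen backbone
adapters = {dom: LowRankAdapter(d_in, d_out, seed=i)
            for i, dom in enumerate(["domain_0", "domain_1"])}

def forward(x, domain):
    W = W_base + adapters[domain].delta()   # base plus per-domain correction
    return x @ W

x = np.ones((1, d_in))
print(forward(x, "domain_0").shape)         # (1, 32)
# Storage per domain: 64*4 + 4*32 = 384 params vs 64*32 = 2048 for a full copy.
```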
Auditing and Generating Synthetic Data With Controllable Trust Trade-Offs
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-10 | DOI: 10.1109/JETCAS.2024.3477976
Brian Belgodere; Pierre Dognin; Adam Ivankay; Igor Melnyk; Youssef Mroueh; Aleksandra Mojsilović; Jiri Navratil; Apoorva Nitsure; Inkit Padhi; Mattia Rigotti; Jerret Ross; Yair Schiff; Radhika Vedpathak; Richard A. Young
Abstract: Real-world data often exhibits bias, imbalance, and privacy risks. Synthetic datasets have emerged to address these issues by enabling a paradigm that relies on generative AI models to generate unbiased, privacy-preserving data while maintaining fidelity to the original data. However, assessing the trustworthiness of synthetic datasets and models is a critical challenge. We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models. It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation. We demonstrate our framework's effectiveness by auditing various generative models across diverse use cases such as education, healthcare, banking, and human resources, spanning different data modalities including tabular, time-series, vision, and natural language. This holistic assessment is essential for compliance with regulatory safeguards. We introduce a trustworthiness index to rank synthetic datasets based on their safeguard trade-offs. Furthermore, we present a trustworthiness-driven model selection and cross-validation process during training, exemplified with "TrustFormers" across various data types. This approach allows for controllable trustworthiness trade-offs in synthetic data creation. Our auditing framework fosters collaboration among stakeholders, including data scientists, governance experts, internal reviewers, external certifiers, and regulators. This transparent reporting should become standard practice to prevent bias, discrimination, and privacy violations, ensuring compliance with policies and providing accountability, safety, and performance guarantees.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713321
Vol. 14, No. 4, pp. 773-788
Citations: 0
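As a toy illustration of a trustworthiness index (the metric names, scores, and weights below are invented, not the paper's), candidate synthetic datasets can be ranked by a weighted combination of per-dimension audit scores, with the weights encoding stakeholder trade-offs.

```python
import numpy as np

# audit scores in [0, 1], higher is better; rows = candidate synthetic datasets
metrics = ["fidelity", "utility", "privacy", "fairness", "robustness"]
scores = np.array([[0.92, 0.88, 0.60, 0.75, 0.80],
                   [0.85, 0.90, 0.85, 0.70, 0.78],
                   [0.70, 0.65, 0.95, 0.90, 0.82]])
weights = np.array([0.25, 0.25, 0.20, 0.15, 0.15])  # stakeholder trade-offs

index = scores @ weights                   # weighted trustworthiness index
for rank, i in enumerate(np.argsort(index)[::-1], 1):
    print(f"#{rank}: dataset {i}, index = {index[i]:.3f}")
```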
An Overview of Trustworthy AI: Advances in IP Protection, Privacy-Preserving Federated Learning, Security Verification, and GAI Safety Alignment
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-10-09 | DOI: 10.1109/JETCAS.2024.3477348
Yue Zheng; Chip-Hong Chang; Shih-Hsu Huang; Pin-Yu Chen; Stjepan Picek
Abstract: AI has undergone a remarkable evolution marked by groundbreaking milestones. Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands. Understanding that no model is perfect, trustworthy AI starts from the intuitive aim of mitigating the harm AI can inflict on people and society by prioritizing socially responsible AI ideation, design, development, and deployment toward effecting positive change. The scope of trustworthy AI is broad, covering qualities such as safety, security, privacy, transparency, explainability, fairness, impartiality, robustness, reliability, and accountability. This overview paper anchors on recent advances in four research hotspots of trustworthy AI with compelling and challenging security, privacy, and safety issues: the intellectual property protection of deep learning and generative models, the trustworthiness of federated learning, verification and testing tools for AI systems, and the safety alignment of generative AI systems. Through this comprehensive review, we aim to give readers an overview of the most up-to-date research problems and solutions. By presenting the rapidly evolving factors and constraints that motivate emerging attack and defense strategies throughout the AI life-cycle, we hope to inspire more research effort into guiding AI technologies toward beneficial purposes with greater robustness against malicious use.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10711270
Vol. 14, No. 4, pp. 582-607
Citations: 0
Diffense: Defense Against Backdoor Attacks on Deep Neural Networks With Latent Diffusion
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-09-27 | DOI: 10.1109/JETCAS.2024.3469377
Bowen Hu; Chip-Hong Chang
Abstract: As deep neural network (DNN) models are used in a wide variety of applications, their security has attracted considerable attention. Among the known security vulnerabilities, backdoor attacks have become the most notorious threat to users of pre-trained DNNs and machine learning services. Such attacks manipulate the training data or training process so that the trained model produces a false output for an input carrying a specific trigger, but behaves normally otherwise. In this work, we propose Diffense, a method for detecting such malicious inputs based on the distribution of the latent feature maps of clean input samples of the possibly infected target DNN. By learning the feature-map distribution with a diffusion model and sampling from that model under the guidance of the data to be inspected, backdoor attack data can be detected by its distance from the sampled result. Diffense requires no knowledge of the structure, weights, or training data of the target DNN model, nor does it need to be aware of the backdoor attack method. Diffense is non-intrusive: the target model's accuracy on clean inputs is unaffected, and the inference service can run uninterrupted alongside Diffense. Extensive experiments on DNNs trained for MNIST, CIFAR-10, GTSRB, ImageNet-10, LSUN Object, and LSUN Scene show that the attack success rates of diverse backdoor attacks, including BadNets, IDBA, WaNet, ISSBA, and HTBA, can be significantly suppressed by Diffense. The results generally exceed the performance of existing backdoor mitigation methods, including those that require model modifications or prior knowledge of model weights or attack samples.
Vol. 14, No. 4, pp. 729-742
Citations: 0
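To illustrate distance-based detection in latent space under strong simplifications: in the sketch below, a Gaussian fitted to clean feature maps stands in for the guided latent diffusion model, and inputs are flagged when their Mahalanobis distance from the clean distribution exceeds a percentile threshold calibrated on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
clean_feats = rng.normal(size=(1000, d))        # latent maps of clean inputs
mu = clean_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(clean_feats, rowvar=False) + 1e-6 * np.eye(d))

def score(f):
    """Mahalanobis distance of a feature map from the clean distribution."""
    diff = f - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# calibrate the flagging threshold on clean data only
threshold = np.percentile([score(f) for f in clean_feats], 99)

trigger_feat = rng.normal(size=d) + 4.0         # shifted: backdoor-like feature
print("clean sample flagged:", score(clean_feats[0]) > threshold)
print("trigger sample flagged:", score(trigger_feat) > threshold)
```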
Efficient Artificial Intelligence With Novel Matrix Transformations and Homomorphic Encryption
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-09-24 | DOI: 10.1109/JETCAS.2024.3466849
Quoc Bao Phan; Tuy Tan Nguyen
Abstract: This paper addresses the challenges of data privacy and computational efficiency in artificial intelligence (AI) models by proposing a novel hybrid model that combines homomorphic encryption (HE) with AI to enhance security while maintaining learning accuracy. The novelty of our model lies in a new matrix transformation technique that ensures compatibility with both HE algorithms and AI model weight matrices, significantly improving computational efficiency. Furthermore, we present a first-of-its-kind mathematical proof of convergence for integrating HE into AI models using the adaptive moment estimation optimization algorithm. The effectiveness and practicality of our approach for training on encrypted data are showcased through comprehensive evaluations on well-known datasets for air pollution forecasting and forest fire detection. These results demonstrate high model performance, with an R-squared of nearly 1 for air pollution forecasting and 99% accuracy for forest fire detection. Additionally, our approach achieves a reduction of up to 90% in data storage and a tenfold increase in speed compared to models that do not use the matrix transformation method. Our primary contribution lies in enhancing the security, efficiency, and dependability of AI models, particularly when dealing with sensitive data.
Vol. 14, No. 4, pp. 717-728
Citations: 0
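The paper's specific matrix transformation is not reproduced here, but the classic diagonal encoding below shows the general trick of making a matrix-vector product HE-friendly: it uses only slot rotations and element-wise multiplies, the operations that CKKS-style schemes support, with plain numpy standing in for encrypted vectors.

```python
import numpy as np

def diagonal_matvec(M, x):
    """y = M @ x using only slot-wise products and rotations (HE-friendly)."""
    n = M.shape[0]
    y = np.zeros(n)
    for i in range(n):
        diag_i = np.array([M[j, (j + i) % n] for j in range(n)])  # i-th diagonal
        y += diag_i * np.roll(x, -i)     # rotation followed by slot-wise multiply
    return y

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))     # e.g., a model weight matrix
x = rng.normal(size=4)          # would be an encrypted activation vector
print(np.allclose(diagonal_matvec(M, x), M @ x))   # True
```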
Re_useVFL: Reuse of Parameters-Based Verifiable Federated Learning With Privacy Preservation Using Gradient Sparsification
IF 3.7 | CAS Q2 | Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | Pub Date: 2024-09-19 | DOI: 10.1109/JETCAS.2024.3463738
Ningxin He; Tiegang Gao; Chuan Zhou
Abstract: Federated learning (FL) exhibits promising potential in the Industrial Internet of Things (IIoT), as it allows multiple institutions to collaboratively train a global model without sharing local data. However, many privacy and security concerns remain in FL systems. The cloud server responsible for aggregating model parameters may be malicious and may distribute manipulated aggregation results that launch nefarious attacks. Additionally, industrial agents may provide incomplete parameters, negatively impacting the global model's performance. To address these issues, we introduce Re_useVFL, an efficient privacy-preserving full-process FL verification scheme. It integrates BLS-based signature verification, adaptive gradient sparsification (AdaGS), and multi-key CKKS encryption (MK-CKKS). Our scheme ensures the integrity of agent-uploaded parameters, the correctness of the cloud server's aggregation results, and consistency verification of the distributed results, thereby providing comprehensive verification across the entire FL process. It also maintains validation accuracy even when some agents drop out during computation. The AdaGS algorithm notably reduces validation overhead by optimizing parameter sparsification and reuse. Additionally, MK-CKKS is employed to protect agent privacy and prevent collusion between agents and the server. Our experiments on three datasets confirm that Re_useVFL achieves lower validation resource overhead than existing methods, demonstrating its practical effectiveness.
Vol. 14, No. 4, pp. 647-660
Citations: 0
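As a sketch of the sparsification building block behind AdaGS (the adaptive part and parameter reuse are omitted; k is fixed here), top-k gradient sparsification uploads only the largest-magnitude entries and retains the residual locally for the next round.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries; retain the rest as a residual."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]    # indices of top-k entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse       # kept locally, added to the next gradient
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)

g = np.random.default_rng(0).normal(size=(8, 8))
sparse_g, residual = topk_sparsify(g, k=6)
print("nonzeros uploaded:", np.count_nonzero(sparse_g), "of", g.size)
```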