Title: Privacy preservation in deep reinforcement learning: A training perspective
Journal: Knowledge-Based Systems (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 7.2)
DOI: 10.1016/j.knosys.2024.112558
Publication date: 2024-09-27 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0950705124011924
Citations: 0
Abstract
Reinforcement learning (RL) is a principled AI framework for autonomous, experience-driven learning. Deep reinforcement learning (DRL) enhances this by incorporating deep learning models, promoting a higher-level understanding of the visual world. However, privacy concerns are emerging in RL applications that involve vast amounts of private information. Recent studies have demonstrated that DRL can leak private information and be vulnerable to attacks aiming to infer the training environment from an agent’s behaviors without direct access to the environment. To address these privacy concerns, we propose a differentially private DRL approach that obfuscates the agent’s observations from each visited state. This defends against privacy leakage attacks and prevents the inference of the agent’s training environment from its optimized policy. We provide a theoretical analysis and design comprehensive experiments to thoroughly reproduce the privacy leakage attack. Both the theoretical analysis and experimental results demonstrate that our method effectively defends against privacy leakage attacks while maintaining the model utility of the RL agent.
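The abstract describes obfuscating the agent's observation at each visited state with differentially private noise. The paper's exact mechanism is not given here, so the following is only a minimal illustrative sketch of one standard way such obfuscation is often done: adding Gaussian-mechanism noise calibrated to an (ε, δ)-differential-privacy guarantee under an assumed L2 sensitivity bound on the observation. The function name and parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def obfuscate_observation(obs, sensitivity, epsilon, delta, rng=None):
    """Perturb an observation vector with Gaussian-mechanism noise.

    Illustrative sketch only (not the paper's method): the noise scale
    sigma follows the standard Gaussian-mechanism calibration for
    (epsilon, delta)-DP given an L2 sensitivity bound on `obs`.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Standard Gaussian-mechanism noise scale:
    # sigma = S * sqrt(2 * ln(1.25 / delta)) / epsilon
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=np.shape(obs))
    return np.asarray(obs, dtype=float) + noise
```

In practice the observation would first be clipped (or otherwise normalized) so that the assumed sensitivity bound actually holds; the perturbed observation is then what the agent's policy sees during training, decoupling the learned policy from the exact states of the training environment.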
Journal Introduction:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.