Controlled Sensing and Anomaly Detection Via Soft Actor-Critic Reinforcement Learning
Chen Zhong, M. C. Gursoy, Senem Velipasalar
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), published 2022-05-23
DOI: 10.1109/icassp43922.2022.9747436
Citations: 1
Abstract
In this paper, we propose a soft actor-critic deep reinforcement learning framework to address the anomaly detection problem in the presence of noisy observations, and to tackle the hyperparameter-tuning and efficient-exploration challenges that arise in deep reinforcement learning algorithms. To evaluate the proposed framework, we measure its performance in terms of detection accuracy, stopping time, and the total number of samples needed for detection. Via simulation results, we demonstrate the performance achieved when soft actor-critic algorithms are employed, and identify the impact of key parameters, such as the sensing cost, on the performance. In all results, we further compare the performance of the proposed soft actor-critic algorithm against that of conventional actor-critic algorithms.
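The abstract's core ingredients can be illustrated in miniature. The sketch below is not the paper's method: it is a tabular, discrete-action toy that captures the entropy-regularized ("soft") objective underlying soft actor-critic, applied to a hypothetical controlled-sensing problem where an agent either pays a sensing cost for another noisy sample or stops and declares a detection. All numbers (sensing cost, temperature, the accuracy model) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy controlled-sensing MDP (hypothetical; not the paper's setup).
# State s in {0,...,3} counts how many samples have been taken; actions:
#   0 = "sense": pay a sensing cost and gather one more noisy sample
#   1 = "stop":  declare a decision; reward models detection accuracy
N_STATES, N_ACTIONS = 4, 2
SENSING_COST = 0.1                 # the key cost parameter (value assumed)
GAMMA, ALPHA, LR = 0.95, 0.2, 0.1  # discount, entropy temperature, step size

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))  # soft Q-table (the "critic")

def soft_value(q_row):
    # Entropy-regularized value: V(s) = alpha * log sum_a exp(Q(s,a)/alpha)
    return ALPHA * np.log(np.sum(np.exp(q_row / ALPHA)))

def policy(q_row):
    # Soft (Boltzmann) policy, the "actor": pi(a|s) ~ exp(Q(s,a)/alpha);
    # the temperature alpha keeps the policy stochastic, aiding exploration.
    z = np.exp((q_row - q_row.max()) / ALPHA)
    return z / z.sum()

def step(state, action):
    # Sensing advances the sample count at a cost; at the last state it
    # falls through to stopping. Stopping yields a reward that grows with
    # the number of samples taken (a crude stand-in for detection accuracy).
    if action == 0 and state < N_STATES - 1:
        return state + 1, -SENSING_COST, False
    accuracy = 0.5 + 0.5 * state / (N_STATES - 1)
    return 0, accuracy, True

for _ in range(5000):
    s, done = 0, False
    while not done:
        a = rng.choice(N_ACTIONS, p=policy(Q[s]))
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * soft_value(Q[s2]))
        Q[s, a] += LR * (target - Q[s, a])  # soft temporal-difference update
        s = s2

print("policy at start state:", policy(Q[0]))
```

Raising SENSING_COST shifts probability mass toward stopping early, which is the accuracy-versus-stopping-time trade-off the abstract alludes to; the full method replaces the Q-table with neural critic and actor networks.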