Linear Complementary Dual Codes Constructed from Reinforcement Learning
Yansheng Wu, Jin Ma, Shandong Yang
arXiv - MATH - Information Theory, arXiv:2409.08114, published 2024-09-12
Citations: 0
Abstract
Recently, Linear Complementary Dual (LCD) codes have garnered substantial
interest within coding theory research due to their diverse applications and
favorable attributes. This paper directs its attention to the construction of
binary and ternary LCD codes leveraging curiosity-driven reinforcement learning
(RL). By designing suitable reward functions and well-reasoned mappings from
actions to states, the approach enables the successful synthesis of binary and
ternary LCD codes. Experimental results indicate that LCD codes constructed
with RL exhibit slightly better error-correction performance than both
conventionally constructed LCD codes and those developed via standard RL
methods. The paper introduces novel binary and ternary LCD codes with
enhanced minimum distance bounds. Finally, it demonstrates how Random Network
Distillation helps agents explore beyond local optima, improving overall model
performance without compromising convergence.
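As background to the constructions discussed above: a linear code C is LCD exactly when C ∩ C⊥ = {0}, and by Massey's well-known criterion this holds if and only if G·Gᵀ is nonsingular over the underlying field, for any generator matrix G of C. A minimal sketch of that check over GF(p), p prime, follows; the function name and example matrices are illustrative and not taken from the paper.

```python
import numpy as np

def is_lcd(G: np.ndarray, p: int) -> bool:
    """Massey's criterion: a linear code with generator matrix G over GF(p)
    (p prime) is LCD iff G @ G.T is nonsingular over GF(p)."""
    M = (G @ G.T) % p
    n = M.shape[0]
    M = M.copy()
    det = 1
    for i in range(n):
        # Find a pivot row with a nonzero entry in column i.
        piv = next((r for r in range(i, n) if M[r, i] % p != 0), None)
        if piv is None:
            return False  # singular over GF(p) -> not LCD
        if piv != i:
            M[[i, piv]] = M[[piv, i]]
            det = (-det) % p  # row swap flips the determinant's sign
        det = (det * int(M[i, i])) % p
        # Modular inverse of the pivot via Fermat's little theorem (p prime).
        inv = pow(int(M[i, i]), p - 2, p)
        for r in range(i + 1, n):
            M[r] = (M[r] - M[r, i] * inv * M[i]) % p
    return bool(det % p != 0)

# The binary code generated by (1 0) is LCD; the length-2 repetition
# code generated by (1 1) over GF(2) is not (it meets its dual in itself);
# over GF(3) the same matrix (1 1) does generate an LCD code.
print(is_lcd(np.array([[1, 0]]), 2))  # True
print(is_lcd(np.array([[1, 1]]), 2))  # False
print(is_lcd(np.array([[1, 1]]), 3))  # True
```

An RL agent constructing LCD codes can use such a predicate as part of its reward signal: candidate generator matrices that fail the test are rejected, while passing candidates are scored, e.g., by their minimum distance.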