Reinforcement Learning for Guiding the E Theorem Prover

Jack McKeown, G. Sutcliffe
DOI: 10.32473/flairs.36.133334
Journal: The International FLAIRS Conference Proceedings
Published: 2023-05-08
Citation count: 0

Abstract

Automated Theorem Proving (ATP) systems search for a proof in a rapidly growing space of possibilities. Heuristics have a profound impact on search, and ATP systems make heavy use of heuristics. This work uses reinforcement learning to learn a metaheuristic that decides which heuristic to use at each step of a proof search in the E ATP system. Proximal policy optimization is used to dynamically select a heuristic from a fixed set, based on the current state of E. The approach is evaluated on its ability to reduce the number of inference steps used in successful proof searches, as an indicator of intelligent search.
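The abstract describes proximal policy optimization (PPO) choosing one heuristic from a fixed set at each proof-search step. As a minimal illustrative sketch only (not the paper's implementation; the heuristic names and state features are hypothetical), the two core pieces are a softmax policy over the heuristic set and PPO's clipped surrogate objective, which bounds how strongly any one update can push the policy:

```python
import math

# Hypothetical fixed heuristic set a metaheuristic could choose from.
HEURISTICS = ["fifo", "symbol_count", "clause_weight"]

def softmax(logits):
    """Turn per-heuristic scores into selection probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate for one action:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r = pi_new(a|s) / pi_old(a|s) and A is the advantage."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

if __name__ == "__main__":
    # Toy state: pretend the policy network emitted these logits.
    probs = softmax([0.5, 1.5, 0.2])
    chosen = HEURISTICS[probs.index(max(probs))]
    print("selection probabilities:", probs)
    print("selected heuristic:", chosen)
    # A large probability ratio gets clipped, capping the incentive.
    print("objective at r=1.5, A=1:", ppo_clip_objective(1.5, 1.0))
```

In a full implementation the logits would come from a network reading features of E's current state, and the clipped objective would be averaged over sampled proof-search steps and maximized by gradient ascent; the sketch only shows the selection and clipping mechanics.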