Symbolic Regression with a Learned Concept Library

Arya Grayeli, Atharva Sehgal, Omar Costilla-Reyes, Miles Cranmer, Swarat Chaudhuri

arXiv:2409.09359 (arXiv - CS - Symbolic Computation), 14 September 2024
{"title":"利用学习概念库进行符号回归","authors":"Arya Grayeli, Atharva Sehgal, Omar Costilla-Reyes, Miles Cranmer, Swarat Chaudhuri","doi":"arxiv-2409.09359","DOIUrl":null,"url":null,"abstract":"We present a novel method for symbolic regression (SR), the task of searching\nfor compact programmatic hypotheses that best explain a dataset. The problem is\ncommonly solved using genetic algorithms; we show that we can enhance such\nmethods by inducing a library of abstract textual concepts. Our algorithm,\ncalled LaSR, uses zero-shot queries to a large language model (LLM) to discover\nand evolve concepts occurring in known high-performing hypotheses. We discover\nnew hypotheses using a mix of standard evolutionary steps and LLM-guided steps\n(obtained through zero-shot LLM queries) conditioned on discovered concepts.\nOnce discovered, hypotheses are used in a new round of concept abstraction and\nevolution. We validate LaSR on the Feynman equations, a popular SR benchmark,\nas well as a set of synthetic tasks. On these benchmarks, LaSR substantially\noutperforms a variety of state-of-the-art SR approaches based on deep learning\nand evolutionary algorithms. Moreover, we show that LaSR can be used to\ndiscover a novel and powerful scaling law for LLMs.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"100 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Symbolic Regression with a Learned Concept Library\",\"authors\":\"Arya Grayeli, Atharva Sehgal, Omar Costilla-Reyes, Miles Cranmer, Swarat Chaudhuri\",\"doi\":\"arxiv-2409.09359\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a novel method for symbolic regression (SR), the task of searching\\nfor compact programmatic hypotheses that best explain a dataset. The problem is\\ncommonly solved using genetic algorithms; we show that we can enhance such\\nmethods by inducing a library of abstract textual concepts. Our algorithm,\\ncalled LaSR, uses zero-shot queries to a large language model (LLM) to discover\\nand evolve concepts occurring in known high-performing hypotheses. We discover\\nnew hypotheses using a mix of standard evolutionary steps and LLM-guided steps\\n(obtained through zero-shot LLM queries) conditioned on discovered concepts.\\nOnce discovered, hypotheses are used in a new round of concept abstraction and\\nevolution. We validate LaSR on the Feynman equations, a popular SR benchmark,\\nas well as a set of synthetic tasks. On these benchmarks, LaSR substantially\\noutperforms a variety of state-of-the-art SR approaches based on deep learning\\nand evolutionary algorithms. 
Moreover, we show that LaSR can be used to\\ndiscover a novel and powerful scaling law for LLMs.\",\"PeriodicalId\":501033,\"journal\":{\"name\":\"arXiv - CS - Symbolic Computation\",\"volume\":\"100 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Symbolic Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09359\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
We present a novel method for symbolic regression (SR), the task of searching for compact programmatic hypotheses that best explain a dataset. The problem is commonly solved using genetic algorithms; we show that we can enhance such methods by inducing a library of abstract textual concepts. Our algorithm, called LaSR, uses zero-shot queries to a large language model (LLM) to discover and evolve concepts occurring in known high-performing hypotheses. We discover new hypotheses using a mix of standard evolutionary steps and LLM-guided steps (obtained through zero-shot LLM queries) conditioned on discovered concepts. Once discovered, hypotheses are used in a new round of concept abstraction and evolution. We validate LaSR on the Feynman equations, a popular SR benchmark, as well as a set of synthetic tasks. On these benchmarks, LaSR substantially outperforms a variety of state-of-the-art SR approaches based on deep learning and evolutionary algorithms. Moreover, we show that LaSR can be used to discover a novel and powerful scaling law for LLMs.
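
To make the loop concrete, below is a minimal Python sketch of the concept-guided evolutionary round the abstract describes: standard mutations mixed with LLM-guided mutations conditioned on a concept library, followed by abstraction of new concepts from the current elite hypotheses. All names here (Hypothesis, llm_query, the mutation operators, p_llm) are illustrative assumptions rather than the authors' actual implementation, and the LLM call is stubbed out.

```python
"""Hypothetical sketch of a LaSR-style round, not the authors' code.

The LLM interface is a stub; a real system would issue zero-shot
queries to an actual model and parse the responses.
"""
import random
from dataclasses import dataclass


@dataclass
class Hypothesis:
    expr: str       # candidate symbolic expression, e.g. "m * v**2 / 2"
    fitness: float  # e.g. negative MSE on the dataset (higher is better)


def llm_query(prompt: str) -> str:
    """Stub for a zero-shot LLM call (placeholder response)."""
    return "high-performing expressions are quadratic in velocity"


def abstract_concepts(elite: list[Hypothesis], library: list[str]) -> None:
    """Ask the LLM what known high-performing hypotheses have in common."""
    prompt = ("What abstract textual concepts do these expressions share?\n"
              + "\n".join(h.expr for h in elite))
    library.append(llm_query(prompt))


def llm_guided_mutation(h: Hypothesis, library: list[str]) -> Hypothesis:
    """Propose a variant of h, conditioned on the discovered concepts."""
    prompt = f"Given the concepts {library!r}, propose a variant of: {h.expr}"
    return Hypothesis(expr=llm_query(prompt), fitness=float("-inf"))


def random_mutation(h: Hypothesis) -> Hypothesis:
    """Stand-in for a standard genetic-programming mutation step."""
    return Hypothesis(expr=h.expr + " + c", fitness=float("-inf"))


def lasr_round(pop, library, evaluate, p_llm=0.1):
    """One round: evolve hypotheses, then refresh the concept library."""
    children = []
    for h in pop:
        child = (llm_guided_mutation(h, library)
                 if random.random() < p_llm else random_mutation(h))
        child.fitness = evaluate(child.expr)
        children.append(child)
    # Keep the best of parents and children (elitist selection).
    pop = sorted(pop + children, key=lambda h: h.fitness,
                 reverse=True)[:len(pop)]
    abstract_concepts(pop[:5], library)  # concepts drawn from current elites
    return pop, library
```

The structural point the sketch tries to capture is the two-way flow the abstract describes: the concept library steers hypothesis evolution, while newly discovered hypotheses feed the next round of concept abstraction.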