Eliciting problem specifications for LLM-Modulo cognitive systems

IF 2.4 · Tier 3 (Psychology) · Q3 (Computer Science, Artificial Intelligence)
Robert E. Wray, James R. Kirk, John E. Laird
{"title":"引出llm模认知系统的问题规范","authors":"Robert E. Wray,&nbsp;James R. Kirk,&nbsp;John E. Laird","doi":"10.1016/j.cogsys.2025.101409","DOIUrl":null,"url":null,"abstract":"<div><div>Large language models (LLMs) offer unprecedented natural-language understanding and generation capabilities. However, evaluations of their ability to demonstrate other cognitive functions, especially various categories of reasoning, have been, at best, mixed. The limited scope of reliable and robust LLM capabilities has resulted in a new class of AI systems, LLM-Modulo AI, in which LLMs are used to contribute to the overall capabilities of an intelligent system. In this paper, we explore the applicability of LLMs for one specific capability acutely missing in most cognitive systems: problem formulation. Cognitive systems generally require a human to translate a problem definition into some specification that the cognitive system can use to attempt to solve the problem or perform the task. We explore how large language models (LLMs) can be utilized to map a problem class, defined in natural language, into a semi-formal specification that can then be utilized by an existing reasoning and learning system to solve instances from the problem class. The result is a Modulo-LLM cognitive system in which the LLM roughly acts as a <em>cognitive task analyst</em>, generating a problem specification that can be used by a typical cognitive system to solve specific problems. The agent uses prompts derived from the definition of problem spaces in the AI literature and general problem-solving strategies (Polya’s <em>How to Solve It</em>). We offer preliminary evidence illustrating the potential for LLM-based problem specification. Such automatic problem specification offers the potential to speed cognitive systems research via disintermediation of problem formulation while also retaining core capabilities of cognitive systems, such as robust inference and online learning.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"94 ","pages":"Article 101409"},"PeriodicalIF":2.4000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Eliciting problem specifications for LLM-Modulo cognitive systems\",\"authors\":\"Robert E. Wray,&nbsp;James R. Kirk,&nbsp;John E. Laird\",\"doi\":\"10.1016/j.cogsys.2025.101409\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Large language models (LLMs) offer unprecedented natural-language understanding and generation capabilities. However, evaluations of their ability to demonstrate other cognitive functions, especially various categories of reasoning, have been, at best, mixed. The limited scope of reliable and robust LLM capabilities has resulted in a new class of AI systems, LLM-Modulo AI, in which LLMs are used to contribute to the overall capabilities of an intelligent system. In this paper, we explore the applicability of LLMs for one specific capability acutely missing in most cognitive systems: problem formulation. Cognitive systems generally require a human to translate a problem definition into some specification that the cognitive system can use to attempt to solve the problem or perform the task. 
We explore how large language models (LLMs) can be utilized to map a problem class, defined in natural language, into a semi-formal specification that can then be utilized by an existing reasoning and learning system to solve instances from the problem class. The result is a Modulo-LLM cognitive system in which the LLM roughly acts as a <em>cognitive task analyst</em>, generating a problem specification that can be used by a typical cognitive system to solve specific problems. The agent uses prompts derived from the definition of problem spaces in the AI literature and general problem-solving strategies (Polya’s <em>How to Solve It</em>). We offer preliminary evidence illustrating the potential for LLM-based problem specification. Such automatic problem specification offers the potential to speed cognitive systems research via disintermediation of problem formulation while also retaining core capabilities of cognitive systems, such as robust inference and online learning.</div></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"94 \",\"pages\":\"Article 101409\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041725000890\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041725000890","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Large language models (LLMs) offer unprecedented natural-language understanding and generation capabilities. However, evaluations of their ability to demonstrate other cognitive functions, especially various categories of reasoning, have been, at best, mixed. The limited scope of reliable and robust LLM capabilities has resulted in a new class of AI systems, LLM-Modulo AI, in which LLMs are used to contribute to the overall capabilities of an intelligent system. In this paper, we explore the applicability of LLMs for one specific capability acutely missing in most cognitive systems: problem formulation. Cognitive systems generally require a human to translate a problem definition into some specification that the cognitive system can use to attempt to solve the problem or perform the task. We explore how large language models (LLMs) can be utilized to map a problem class, defined in natural language, into a semi-formal specification that can then be utilized by an existing reasoning and learning system to solve instances from the problem class. The result is a Modulo-LLM cognitive system in which the LLM roughly acts as a cognitive task analyst, generating a problem specification that can be used by a typical cognitive system to solve specific problems. The agent uses prompts derived from the definition of problem spaces in the AI literature and general problem-solving strategies (Polya’s How to Solve It). We offer preliminary evidence illustrating the potential for LLM-based problem specification. Such automatic problem specification offers the potential to speed cognitive systems research via disintermediation of problem formulation while also retaining core capabilities of cognitive systems, such as robust inference and online learning.
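To make the architecture described in the abstract concrete, below is a minimal Python sketch (not taken from the paper) of the LLM-as-cognitive-task-analyst step: prompting an LLM to turn a natural-language problem-class description into a semi-formal specification (state description, operators, goal test) that a downstream reasoning system could consume. The function names (`call_llm`, `elicit_problem_spec`), the JSON schema, and the prompt wording are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the "LLM as cognitive task analyst" idea: the LLM maps a
# natural-language problem-class description into a semi-formal problem-space
# specification for a separate reasoner. All names and the schema are hypothetical.

import json
from dataclasses import dataclass


@dataclass
class ProblemSpec:
    """Semi-formal problem-space specification."""
    state_description: str   # what a state in the problem space looks like
    operators: list[str]     # legal actions, with preconditions stated in prose
    goal_test: str           # how to recognize a solved instance


PROMPT_TEMPLATE = """You are acting as a cognitive task analyst.
Given the following problem-class description, produce a problem-space
specification as JSON with keys "state_description", "operators", "goal_test".
Following Polya, first restate the unknown, the data, and the condition.

Problem class:
{problem_class}
"""


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with whatever LLM client is actually used."""
    raise NotImplementedError("plug in an LLM client here")


def elicit_problem_spec(problem_class: str) -> ProblemSpec:
    """Ask the LLM for a semi-formal spec and parse it for a downstream reasoner."""
    raw = call_llm(PROMPT_TEMPLATE.format(problem_class=problem_class))
    fields = json.loads(raw)
    return ProblemSpec(
        state_description=fields["state_description"],
        operators=list(fields["operators"]),
        goal_test=fields["goal_test"],
    )


# Example usage (requires a working call_llm):
# spec = elicit_problem_spec("Tower of Hanoi with three pegs and n disks ...")
# print(spec.operators)
```

In this sketch the specification stays semi-formal (prose fields rather than executable operators), mirroring the abstract's framing that the LLM supplies the problem formulation while the cognitive system retains responsibility for reasoning, learning, and solving specific instances.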
Source journal: Cognitive Systems Research (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 9.40
Self-citation rate: 5.10%
Articles published: 40
Review time: >12 weeks
Journal description: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.