LLMs as models for analogical reasoning

Impact Factor: 3.0 · CAS Tier 1 (Psychology) · JCR Q1 (Linguistics)
Sam Musker, Alex Duchnowski, Raphaël Millière, Ellie Pavlick
{"title":"LLMs as models for analogical reasoning","authors":"Sam Musker ,&nbsp;Alex Duchnowski ,&nbsp;Raphaël Millière ,&nbsp;Ellie Pavlick","doi":"10.1016/j.jml.2025.104676","DOIUrl":null,"url":null,"abstract":"<div><div>Analogical reasoning — the capacity to identify and map structural relationships between different domains — is fundamental to human cognition and learning. Recent studies have shown that large language models (LLMs) can sometimes match humans in analogical reasoning tasks, opening the possibility that analogical reasoning might emerge from domain-general processes. However, it is still debated whether these emergent capacities are largely superficial and limited to simple relations seen during training or whether they encompass the flexible representational and mapping capabilities which are the focus of leading cognitive models of analogy. In this study, we introduce novel analogical reasoning tasks that require participants to map between semantically contentful words and sequences of letters and other abstract characters. This task necessitates the ability to flexibly <em>re-represent</em> rich semantic information—an ability which is known to be central to human analogy but which is thus far not well-captured by existing cognitive theories and models. We assess the performance of both human participants and LLMs on tasks focusing on reasoning from semantic structure and semantic content, introducing variations that test the robustness of their analogical inferences. Advanced LLMs match human performance across several conditions, though humans and LLMs respond differently to certain task variations and semantic distractors. Our results thus provide new evidence that LLMs might offer a <em>how-possibly</em> explanation of human analogical reasoning in contexts that are not yet well modeled by existing theories, but that even today’s best models are unlikely to yield <em>how-actually</em> explanations.</div></div>","PeriodicalId":16493,"journal":{"name":"Journal of memory and language","volume":"145 ","pages":"Article 104676"},"PeriodicalIF":3.0000,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of memory and language","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0749596X25000695","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LINGUISTICS","Score":null,"Total":0}
Citations: 0

Abstract

Analogical reasoning — the capacity to identify and map structural relationships between different domains — is fundamental to human cognition and learning. Recent studies have shown that large language models (LLMs) can sometimes match humans in analogical reasoning tasks, opening the possibility that analogical reasoning might emerge from domain-general processes. However, it is still debated whether these emergent capacities are largely superficial and limited to simple relations seen during training or whether they encompass the flexible representational and mapping capabilities which are the focus of leading cognitive models of analogy. In this study, we introduce novel analogical reasoning tasks that require participants to map between semantically contentful words and sequences of letters and other abstract characters. This task necessitates the ability to flexibly re-represent rich semantic information—an ability which is known to be central to human analogy but which is thus far not well-captured by existing cognitive theories and models. We assess the performance of both human participants and LLMs on tasks focusing on reasoning from semantic structure and semantic content, introducing variations that test the robustness of their analogical inferences. Advanced LLMs match human performance across several conditions, though humans and LLMs respond differently to certain task variations and semantic distractors. Our results thus provide new evidence that LLMs might offer a how-possibly explanation of human analogical reasoning in contexts that are not yet well modeled by existing theories, but that even today’s best models are unlikely to yield how-actually explanations.
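To make the task format more concrete, the following is a minimal, hypothetical sketch (in Python) of how a word-to-letter-string analogy item might be posed and scored. The item, prompt wording, and exact-match scoring rule are illustrative assumptions only, not the authors' actual stimuli or procedure.

    # Hypothetical illustration only; not the authors' stimuli or scoring procedure.
    def format_item(word_a: str, word_b: str, string_a: str) -> str:
        """Pose an analogy from a semantic word pair to an abstract letter string."""
        return (f"{word_a} is to {word_b} as {string_a} is to what? "
                "Answer with a single letter string.")

    def is_correct(response: str, expected: str) -> bool:
        """Assumed exact-match criterion after trimming and lowercasing."""
        return response.strip().lower() == expected.strip().lower()

    # Example: a semantic relation (opposition) must be re-represented as a
    # structural relation over characters (here, reversal of the string).
    print(format_item("hot", "cold", "abc"))  # "hot is to cold as abc is to what? ..."
    print(is_correct(" CBA ", "cba"))         # True under the assumed scoring rule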
Source journal: Journal of Memory and Language
CiteScore: 8.70
Self-citation rate: 14.00%
Articles published: 49
Average review time: 12.7 weeks
About the journal: Articles in the Journal of Memory and Language contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes. Special emphasis is given to research articles that provide new theoretical insights based on a carefully laid empirical foundation. The journal generally favors articles that report multiple experiments, although significant theoretical papers without new experimental findings may also be published. The Journal of Memory and Language is a valuable tool for cognitive scientists, including psychologists, linguists, and others interested in memory and learning, language, reading, and speech.
Research areas include:
• Topics that illuminate aspects of memory or language processing
• Linguistics
• Neuropsychology