Eliciting metaknowledge in Large Language Models

IF 2.1 · CAS Tier 3 (Psychology) · JCR Q3, Computer Science, Artificial Intelligence
Carmelo Fabio Longo, Misael Mongiovì, Luana Bulla, Antonio Lieto
{"title":"大型语言模型中元知识的提取","authors":"Carmelo Fabio Longo ,&nbsp;Misael Mongiovì ,&nbsp;Luana Bulla ,&nbsp;Antonio Lieto","doi":"10.1016/j.cogsys.2025.101352","DOIUrl":null,"url":null,"abstract":"<div><div>The introduction of Large Language Models (LLMs) able to exhibit a number of linguistic and extra-linguistic capabilities has represented, in the last years, one of the main frontiers in Artificial Intelligence (AI) research. Researcher from various disciplines debate about whether or not, among the capabilities of LLMs, there is the one of using <em>knowledge about knowledge</em> – usually considered one of the antechambers of <em>meta-cognition</em> in cognitive agents – about a particular task in order to improve or self-correct previous errors. In this work we propose a novel fine-tuning approach for LLMs, named <span>exar</span>, based on a multi-stage process leveraging past predictions from an early version of the same, and aimed at <em>injecting</em> metacognitive features for the task of Question-Answering. The conducted experiments on <span>Llama-2-7B-chat</span> showed promising improvements on the quality of the outcomes, due to the fact that the LLM acquired the ability to detect its own wrong predictions forcing itself to repeat submissions, thorough a prompt designed to fix inadmissible predictions, whenever detected. Such detection is achieved by enquiring the same LLM acting as meta-validator, through another prompt specifically designed for such purpose.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"91 ","pages":"Article 101352"},"PeriodicalIF":2.1000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Eliciting metaknowledge in Large Language Models\",\"authors\":\"Carmelo Fabio Longo ,&nbsp;Misael Mongiovì ,&nbsp;Luana Bulla ,&nbsp;Antonio Lieto\",\"doi\":\"10.1016/j.cogsys.2025.101352\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The introduction of Large Language Models (LLMs) able to exhibit a number of linguistic and extra-linguistic capabilities has represented, in the last years, one of the main frontiers in Artificial Intelligence (AI) research. Researcher from various disciplines debate about whether or not, among the capabilities of LLMs, there is the one of using <em>knowledge about knowledge</em> – usually considered one of the antechambers of <em>meta-cognition</em> in cognitive agents – about a particular task in order to improve or self-correct previous errors. In this work we propose a novel fine-tuning approach for LLMs, named <span>exar</span>, based on a multi-stage process leveraging past predictions from an early version of the same, and aimed at <em>injecting</em> metacognitive features for the task of Question-Answering. The conducted experiments on <span>Llama-2-7B-chat</span> showed promising improvements on the quality of the outcomes, due to the fact that the LLM acquired the ability to detect its own wrong predictions forcing itself to repeat submissions, thorough a prompt designed to fix inadmissible predictions, whenever detected. 
Such detection is achieved by enquiring the same LLM acting as meta-validator, through another prompt specifically designed for such purpose.</div></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"91 \",\"pages\":\"Article 101352\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041725000324\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041725000324","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The introduction of Large Language Models (LLMs) able to exhibit a number of linguistic and extra-linguistic capabilities has represented, in recent years, one of the main frontiers in Artificial Intelligence (AI) research. Researchers from various disciplines debate whether the capabilities of LLMs include using knowledge about knowledge – usually considered one of the antechambers of meta-cognition in cognitive agents – about a particular task in order to improve on or self-correct previous errors. In this work we propose a novel fine-tuning approach for LLMs, named exar, based on a multi-stage process that leverages past predictions from an earlier version of the same model, and aimed at injecting metacognitive features for the task of Question-Answering. Experiments conducted on Llama-2-7B-chat showed promising improvements in the quality of the outcomes: the LLM acquired the ability to detect its own wrong predictions and, whenever one is detected, to repeat the submission through a prompt designed to fix inadmissible predictions. Such detection is achieved by querying the same LLM, acting as a meta-validator, through another prompt specifically designed for this purpose.
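The abstract describes an answer/validate/retry loop at inference time: the model answers, the same model is queried as a meta-validator, and inadmissible predictions trigger a corrective re-submission. The following is a minimal sketch of that loop, assuming a hypothetical generate() helper standing in for the fine-tuned Llama-2-7B-chat model and illustrative prompt wordings; the paper's actual exar prompts and multi-stage fine-tuning procedure are not reproduced here.

```python
# Sketch of the answer / meta-validate / retry loop described in the abstract.
# generate(), the prompt texts, and MAX_RETRIES are illustrative assumptions,
# not the authors' actual EXAR prompts or training pipeline.

MAX_RETRIES = 3

def generate(prompt: str) -> str:
    """Placeholder for a call to the fine-tuned Llama-2-7B-chat model."""
    raise NotImplementedError

def is_admissible(question: str, answer: str) -> bool:
    """Query the same LLM, acting as meta-validator, through a dedicated prompt."""
    verdict = generate(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Is this answer admissible? Reply YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def answer_with_metavalidation(question: str) -> str:
    """Answer, then validate; on an inadmissible prediction, re-submit via a corrective prompt."""
    answer = generate(f"Question: {question}\nAnswer:")
    for _ in range(MAX_RETRIES):
        if is_admissible(question, answer):
            break
        answer = generate(
            f"Question: {question}\n"
            f"Your previous answer ({answer}) was judged inadmissible. "
            "Provide a corrected answer:"
        )
    return answer
```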
Source journal: Cognitive Systems Research (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 9.40
Self-citation rate: 5.10%
Articles published per year: 40
Review time: >12 weeks
Journal description: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.