Eliciting metaknowledge in Large Language Models
Carmelo Fabio Longo, Misael Mongiovì, Luana Bulla, Antonio Lieto
Cognitive Systems Research, Volume 91, Article 101352 (April 2025)
DOI: 10.1016/j.cogsys.2025.101352
URL: https://www.sciencedirect.com/science/article/pii/S1389041725000324
Citations: 0
Abstract
The introduction of Large Language Models (LLMs) able to exhibit a number of linguistic and extra-linguistic capabilities has represented, in recent years, one of the main frontiers in Artificial Intelligence (AI) research. Researchers from various disciplines debate whether the capabilities of LLMs include the use of knowledge about knowledge – usually considered one of the antechambers of meta-cognition in cognitive agents – about a particular task in order to improve or self-correct previous errors. In this work we propose a novel fine-tuning approach for LLMs, named EXAR, based on a multi-stage process that leverages past predictions from an early version of the same model, aimed at injecting metacognitive features for the task of Question-Answering. Experiments conducted on Llama-2-7B-chat showed promising improvements in the quality of the outcomes: the LLM acquired the ability to detect its own wrong predictions and, whenever one is detected, to repeat the submission through a prompt designed to fix inadmissible predictions. Such detection is achieved by querying the same LLM acting as a meta-validator, through another prompt specifically designed for this purpose.
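The answer/validate/retry loop the abstract describes can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the prompt wording, the `query_llm` stub, and the toy model behaviour are all hypothetical placeholders standing in for calls to the fine-tuned Llama-2-7B-chat model.

```python
# Hypothetical sketch of an EXAR-style self-correction loop:
# answer, then meta-validate with the same model, then retry on failure.

def query_llm(prompt: str) -> str:
    """Stand-in for a call to the fine-tuned LLM (toy behaviour for illustration)."""
    if "fix" in prompt.lower():
        return "Rome"                     # corrected answer after the fix prompt
    if "admissible" in prompt.lower():
        return "YES" if "Rome" in prompt else "NO"   # meta-validator verdict
    return "Paris"                        # first, deliberately wrong attempt

def answer_with_metavalidation(question: str, max_retries: int = 3) -> str:
    answer = query_llm(question)
    for _ in range(max_retries):
        # The same model acts as meta-validator through a dedicated prompt.
        verdict = query_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Is this answer admissible? (YES/NO)"
        )
        if verdict.strip().upper().startswith("YES"):
            return answer
        # Inadmissible: resubmit through a prompt designed to fix the prediction.
        answer = query_llm(f"Fix the previous answer to: {question}")
    return answer

print(answer_with_metavalidation("What is the capital of Italy?"))
```

With the toy stub, the first attempt ("Paris") is rejected by the meta-validator, the fix prompt yields "Rome", and the second validation passes, mirroring the detect-and-resubmit behaviour described in the abstract.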
Journal overview:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems insofar as it offers insights relevant to the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.