{"title":"大语言模型时代的知识表示与获取:通过pac语义学习推理的思考","authors":"Ionela G. Mocanu, Vaishak Belle","doi":"10.1016/j.nlp.2023.100036","DOIUrl":null,"url":null,"abstract":"<div><p>Human beings are known for their remarkable ability to comprehend, analyse, and interpret common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often defined as a mapping from beliefs to actions, which has led to attempts to formalize and capture explicit representations in the form of databases, knowledge bases, and ontologies in AI agents.</p><p>But in the era of large language models (LLMs), this emphasis might seem unnecessary. After all, these models already capture the extent of human knowledge and can infer appropriate things from it (presumably) as per some innate logical rules. The question then is whether they can also be trained to perform mathematical computations.</p><p>Although the consensus on the reliability of such models is still being studied, early results do seem to suggest they do not offer logically and mathematically consistent results. In this short summary article, we articulate the motivations for still caring about logical/symbolic artefacts and representations, and report on recent progress in learning to reason via the so-called probably approximately correct (PAC)-semantics.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"5 ","pages":"Article 100036"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971912300033X/pdfft?md5=bf1ac9b507bf03d01852d71158a672d4&pid=1-s2.0-S294971912300033X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics\",\"authors\":\"Ionela G. Mocanu, Vaishak Belle\",\"doi\":\"10.1016/j.nlp.2023.100036\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Human beings are known for their remarkable ability to comprehend, analyse, and interpret common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often defined as a mapping from beliefs to actions, which has led to attempts to formalize and capture explicit representations in the form of databases, knowledge bases, and ontologies in AI agents.</p><p>But in the era of large language models (LLMs), this emphasis might seem unnecessary. After all, these models already capture the extent of human knowledge and can infer appropriate things from it (presumably) as per some innate logical rules. The question then is whether they can also be trained to perform mathematical computations.</p><p>Although the consensus on the reliability of such models is still being studied, early results do seem to suggest they do not offer logically and mathematically consistent results. 
In this short summary article, we articulate the motivations for still caring about logical/symbolic artefacts and representations, and report on recent progress in learning to reason via the so-called probably approximately correct (PAC)-semantics.</p></div>\",\"PeriodicalId\":100944,\"journal\":{\"name\":\"Natural Language Processing Journal\",\"volume\":\"5 \",\"pages\":\"Article 100036\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S294971912300033X/pdfft?md5=bf1ac9b507bf03d01852d71158a672d4&pid=1-s2.0-S294971912300033X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Natural Language Processing Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S294971912300033X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S294971912300033X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Knowledge representation and acquisition in the era of large language models: Reflections on learning to reason via PAC-Semantics
Human beings are known for their remarkable ability to comprehend, analyse, and interpret common sense knowledge. This ability is critical for exhibiting intelligent behaviour, often defined as a mapping from beliefs to actions, which has led to attempts to formalize and capture explicit representations in the form of databases, knowledge bases, and ontologies in AI agents.
But in the era of large language models (LLMs), this emphasis might seem unnecessary. After all, these models already capture the extent of human knowledge and can (presumably) infer appropriate things from it according to some innate logical rules. The question then is whether they can also be trained to perform mathematical computations.
Although a consensus on the reliability of such models has yet to emerge, early results do seem to suggest that they do not offer logically and mathematically consistent results. In this short summary article, we articulate the motivations for still caring about logical/symbolic artefacts and representations, and report on recent progress in learning to reason via the so-called probably approximately correct (PAC)-semantics.
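As background for that last point (this sketch is not part of the original abstract): PAC-semantics, introduced by Valiant, replaces classical validity with a probabilistic guarantee, and the notation below is illustrative rather than taken from the article.

% Assume a distribution D over truth assignments x to the propositional variables.
% A formula \varphi is (1-\epsilon)-valid with respect to D if
\Pr_{x \sim D}\bigl[\varphi(x) = \text{true}\bigr] \;\geq\; 1 - \epsilon .
% Classical validity is recovered as the special case \epsilon = 0; "learning to reason"
% then asks whether the (1-\epsilon)-validity of a query can be established directly
% from examples drawn from D, rather than from a hand-built knowledge base.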