Capturing the Varieties of Natural Language Inference: A Systematic Survey of Existing Datasets and Two Novel Benchmarks

Reto Gubelmann, Ioannis Katis, Christina Niklaus, Siegfried Handschuh

Journal of Logic, Language and Information, published 20 November 2023. DOI: 10.1007/s10849-023-09410-4
Transformer-based Pre-Trained Language Models currently dominate the field of Natural Language Inference (NLI). We first survey existing NLI datasets and systematize them according to the different kinds of logical inference that they distinguish. This reveals two gaps in the current dataset landscape, which we propose to address with one dataset developed in argumentative writing research and a new one built on syllogistic logic. Throughout, we also explore the promise of ChatGPT. Our results show that our new datasets do pose a challenge to existing methods and models, including ChatGPT, and that tackling this challenge via fine-tuning yields only partly satisfactory results.
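For readers unfamiliar with the task setup, the sketch below illustrates the standard three-way NLI formulation (premise and hypothesis mapped to entailment, neutral, or contradiction) that the surveyed datasets share. It uses the publicly available roberta-large-mnli checkpoint from the Hugging Face transformers library purely as a stand-in; the model choice, the example sentence pair, and the label order are illustrative assumptions and are not taken from the paper.

```python
# Minimal NLI classification sketch with an off-the-shelf checkpoint.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Publicly available MNLI-trained model, used here only for illustration;
# it is not one of the models or benchmarks evaluated in the paper.
MODEL_NAME = "roberta-large-mnli"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# A syllogism-style premise/hypothesis pair (our own example text,
# echoing the kind of inference the new syllogistic benchmark targets).
premise = "All philosophers are mortal. Socrates is a philosopher."
hypothesis = "Socrates is mortal."

# Encode the sentence pair and run a single forward pass.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order for this particular checkpoint: contradiction, neutral, entailment.
probs = logits.softmax(dim=-1).squeeze().tolist()
for label, p in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {p:.3f}")
```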
Journal Introduction:
The scope of the journal is the logical and computational foundations of natural, formal, and programming languages, as well as the different forms of human and mechanized inference. It covers the logical, linguistic, and information-theoretic parts of the cognitive sciences.
Examples of main subareas are Intensional Logics, including Dynamic Logic; Nonmonotonic Logic and Belief Revision; Constructive Logics; Complexity Issues in Logic and Linguistics; Theoretical Problems of Logic Programming and Resolution; Categorial Grammar and Type Theory; Generalized Quantification; Information-Oriented Theories of Semantic Structure such as Situation Semantics, Discourse Representation Theory, and Dynamic Semantics; and Connectionist Models of Logical and Linguistic Structures. The emphasis is on the theoretical aspects of these areas.