{"title":"社论:大型语言模型中的情感估计和认知谬误分析","authors":"Daniel E. O'Leary","doi":"10.1002/isaf.70010","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>This paper describes some experimentation with the evolving ability of large language models to generate sentiment estimates. We find that current models seem to equal or even exceed the ability of human annotators in a case study of single sentiment sentences. In addition, using the large language models, we were able to identify a small number of sentences in the data set, where it appears that the annotator made errors in assessing the sentiment. Unfortunately, analysis of the LLM results also illustrates apparent cognitive biases in the LLM behavior. Those effects appear to include an “ostrich effect” and a “no one is good enough” effect cognitive bias in LLM sentiment estimates.</p>\n </div>","PeriodicalId":53473,"journal":{"name":"Intelligent Systems in Accounting, Finance and Management","volume":"32 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Editorial: Analysis of Sentiment Estimates and Cognitive Fallacies in Large Language Models\",\"authors\":\"Daniel E. O'Leary\",\"doi\":\"10.1002/isaf.70010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>This paper describes some experimentation with the evolving ability of large language models to generate sentiment estimates. We find that current models seem to equal or even exceed the ability of human annotators in a case study of single sentiment sentences. In addition, using the large language models, we were able to identify a small number of sentences in the data set, where it appears that the annotator made errors in assessing the sentiment. Unfortunately, analysis of the LLM results also illustrates apparent cognitive biases in the LLM behavior. Those effects appear to include an “ostrich effect” and a “no one is good enough” effect cognitive bias in LLM sentiment estimates.</p>\\n </div>\",\"PeriodicalId\":53473,\"journal\":{\"name\":\"Intelligent Systems in Accounting, Finance and Management\",\"volume\":\"32 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Systems in Accounting, Finance and Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/isaf.70010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Economics, Econometrics and Finance\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems in Accounting, Finance and Management","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/isaf.70010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Economics, Econometrics and Finance","Score":null,"Total":0}
Editorial: Analysis of Sentiment Estimates and Cognitive Fallacies in Large Language Models
This paper describes some experimentation with the evolving ability of large language models to generate sentiment estimates. We find that current models seem to equal or even exceed the ability of human annotators in a case study of single-sentiment sentences. In addition, using the large language models, we were able to identify a small number of sentences in the data set where it appears that the annotator made errors in assessing the sentiment. Unfortunately, analysis of the LLM results also illustrates apparent cognitive biases in the LLM behavior. Those biases appear to include an “ostrich effect” and a “no one is good enough” effect in the LLM sentiment estimates.
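The editorial does not include code; the sketch below is only an illustrative assumption of how sentence-level sentiment estimates might be elicited from an LLM and compared with human annotations. The helper `query_llm` is hypothetical and stands in for whatever chat-completion client one actually uses; the prompt wording and the agreement metric are likewise assumptions, not the author's method.

```python
# Minimal sketch (not the paper's code): eliciting sentence-level sentiment
# from an LLM and measuring agreement with human annotator labels.
from typing import List


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a large language model.

    Replace with a real API client of your choice.
    """
    raise NotImplementedError


def llm_sentiment(sentence: str) -> str:
    """Ask the model to classify one sentence as positive, negative, or neutral."""
    prompt = (
        "Classify the sentiment of the following sentence as "
        "'positive', 'negative', or 'neutral'. Reply with one word.\n\n"
        f"Sentence: {sentence}"
    )
    return query_llm(prompt).strip().lower()


def agreement_rate(sentences: List[str], human_labels: List[str]) -> float:
    """Fraction of sentences where the LLM label matches the human annotation."""
    matches = sum(
        llm_sentiment(s) == h.lower() for s, h in zip(sentences, human_labels)
    )
    return matches / len(sentences)
```

Sentences where the LLM label and the human label disagree could then be inspected by hand, which is one plausible way the small number of apparent annotator errors mentioned above might be surfaced.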
Journal Introduction:
Intelligent Systems in Accounting, Finance and Management is a quarterly international journal which publishes original, high-quality material dealing with all aspects of intelligent systems as they relate to the fields of accounting, economics, finance, marketing and management. In addition, the journal is also concerned with related emerging technologies, including big data, business intelligence, social media and other technologies. It encourages the development of novel technologies and the embedding of new and existing technologies into applications of real, practical value. Therefore, implementation issues are of as much concern as development issues. The journal is designed to appeal to academics in the intelligent systems, emerging technologies and business fields, as well as to advanced practitioners who wish to improve the effectiveness, efficiency, or economy of their working practices. A special feature of the journal is the use of two groups of reviewers: those who specialize in intelligent systems work and those who specialize in application areas. Reviewers are asked to address issues of originality and actual or potential impact on research, teaching, or practice in the accounting, finance, or management fields. Authors working on conceptual developments or on laboratory-based explorations of data sets therefore need to address the issue of potential impact at some level in submissions to the journal.