Are chatbots reliable text annotators? Sometimes.

IF 2.2 · Q2 · Multidisciplinary Sciences
Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe
{"title":"聊天机器人是可靠的文本注释器吗?有时","authors":"Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe","doi":"10.1093/pnasnexus/pgaf069","DOIUrl":null,"url":null,"abstract":"<p><p>Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regards to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. Thus, it is important to evaluate the performance of OS LLMs relative to ChatGPT and standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of the performance of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, with results compared with supervised classification models. Using a new dataset of tweets from US news media and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and OS models across the tasks and that the supervised classifier using DistilBERT generally outperforms both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.</p>","PeriodicalId":74468,"journal":{"name":"PNAS nexus","volume":"4 4","pages":"pgaf069"},"PeriodicalIF":2.2000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11954583/pdf/","citationCount":"0","resultStr":"{\"title\":\"Are chatbots reliable text annotators? Sometimes.\",\"authors\":\"Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe\",\"doi\":\"10.1093/pnasnexus/pgaf069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regards to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. Thus, it is important to evaluate the performance of OS LLMs relative to ChatGPT and standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of the performance of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, with results compared with supervised classification models. Using a new dataset of tweets from US news media and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and OS models across the tasks and that the supervised classifier using DistilBERT generally outperforms both. 
Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.</p>\",\"PeriodicalId\":74468,\"journal\":{\"name\":\"PNAS nexus\",\"volume\":\"4 4\",\"pages\":\"pgaf069\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11954583/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PNAS nexus\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/pnasnexus/pgaf069\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PNAS nexus","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/pnasnexus/pgaf069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regards to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. Thus, it is important to evaluate the performance of OS LLMs relative to ChatGPT and standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of the performance of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, with results compared with supervised classification models. Using a new dataset of tweets from US news media and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and OS models across the tasks and that the supervised classifier using DistilBERT generally outperforms both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.
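To make the annotation setup concrete, the sketch below shows roughly what zero-shot binary annotation with an open-source instruction-tuned LLM looks like using the Hugging Face transformers pipeline. The model name, the prompt wording, and the example "politics" question are illustrative assumptions for demonstration, not the prompts, tasks, or models used in the paper.

```python
# Illustrative sketch only: zero-shot binary annotation of a single tweet with
# an open-source instruction-tuned LLM. Model choice, prompt wording, and the
# annotation question are assumptions, not the paper's actual setup.
from transformers import pipeline

# Hypothetical open-source model; any instruction-tuned LLM could be substituted.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def annotate(tweet: str) -> str:
    """Return the model's one-word binary label for a single tweet."""
    prompt = (
        "You are annotating tweets from US news media.\n"
        "Answer with exactly one word: yes or no.\n"
        f"Does the following tweet discuss politics?\nTweet: {tweet}\nAnswer:"
    )
    output = generator(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep only the continuation.
    return output[0]["generated_text"][len(prompt):].strip().lower()

print(annotate("Senate passes the new budget bill after a late-night vote."))
```

A supervised baseline such as the DistilBERT classifier mentioned in the abstract is instead fine-tuned directly on hand-labeled tweets, which helps explain why it can outperform prompted chatbots on narrow, well-defined binary tasks.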
