Cognitive biases in natural language: Automatically detecting, differentiating, and measuring bias in text

Impact Factor: 2.1 · CAS Tier 3 (Psychology) · JCR Q3 (Computer Science, Artificial Intelligence)
Kyrtin Atreides, David J. Kelley
Cognitive Systems Research, Volume 88, Article 101304. Published 2024-10-28.
DOI: 10.1016/j.cogsys.2024.101304
URL: https://www.sciencedirect.com/science/article/pii/S1389041724000986
Citations: 0

Abstract

We examine preliminary results from the first automated system to detect the 188 cognitive biases in the 2016 Cognitive Bias Codex, applied to both human- and AI-generated text and compared against a human performance baseline. The baseline was constructed from the collective intelligence of a small but diverse group of volunteers who independently submitted the cognitive biases they detected for each sample in the first-phase task. For lack of any prior established and relevant benchmark, this baseline served as an approximation of ground truth. Results showed the system outperformed the average human but fell short of the top-performing human and the collective, with greater performance on 18 of the 24 categories in the Codex. The same version of the system was then applied to responses to 150 open-ended questions posed to each of the top five performing closed- and open-source Large Language Models as of the time of testing. Results from this second phase showed measurably higher rates of cognitive bias detection across roughly half of all categories than were observed in human-generated text. Model contamination was also considered for the two types observed, in which the models gave canned responses. Levels of cognitive bias detected in each model were compared both to one another and to data from the first phase.
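The evaluation protocol the abstract describes — independent volunteer annotations pooled into a collective baseline, then compared against the system's detections — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the sample data, the majority-vote threshold, and the set-based F1 comparison are all assumptions for the sake of the example.

```python
from collections import Counter

# Hypothetical annotations: for one text sample, each volunteer independently
# lists the cognitive biases they detected (names from the 2016 Cognitive Bias Codex).
volunteer_annotations = [
    {"confirmation bias", "anchoring"},
    {"confirmation bias"},
    {"confirmation bias", "availability heuristic"},
]

# Hypothetical detections from the automated system for the same sample.
system_detections = {"confirmation bias", "availability heuristic"}

def collective_baseline(annotations, min_votes=2):
    """Approximate ground truth: keep biases flagged by at least `min_votes` volunteers."""
    votes = Counter(bias for ann in annotations for bias in ann)
    return {bias for bias, n in votes.items() if n >= min_votes}

def set_f1(predicted, truth):
    """Set-based F1 between detected biases and the collective baseline."""
    if not predicted and not truth:
        return 1.0
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

baseline = collective_baseline(volunteer_annotations)   # {"confirmation bias"}
score = set_f1(system_detections, baseline)
```

Averaging such per-sample scores over all samples, and grouping biases into the Codex's 24 categories, would yield the per-category comparisons the paper reports; the vote threshold here is a design choice the paper does not specify.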
Source journal: Cognitive Systems Research (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 9.40
Self-citation rate: 5.10%
Annual publications: 40
Review time: >12 weeks
Journal description: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures — both brain-inspired and non-brain-inspired — and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.