More is more: Addition bias in large language models

Luca Santagata, Cristiano De Nobili
{"title":"多即是多:大型语言模型中的加法偏差","authors":"Luca Santagata ,&nbsp;Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"More is more: Addition bias in large language models\",\"authors\":\"Luca Santagata ,&nbsp;Cristiano De Nobili\",\"doi\":\"10.1016/j.chbah.2025.100129\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. 
This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"3 \",\"pages\":\"Article 100129\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others' writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have implications when LLMs are used on a large scale. Additive bias might increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
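The abstract only sketches how additive versus subtractive edits were scored. As a rough, hypothetical illustration of the palindrome task, the following minimal Python sketch classifies a model's edit by comparing character counts; the prompt wording, the generate callable, and the length-based rule are assumptions made for illustration here, not the authors' actual protocol.

from typing import Callable


def classify_edit(original: str, revised: str) -> str:
    """Label a revision as additive, subtractive, or neutral by character count."""
    if len(revised) > len(original):
        return "additive"
    if len(revised) < len(original):
        return "subtractive"
    return "neutral"


def run_palindrome_trial(word: str, generate: Callable[[str], str]) -> str:
    """Ask a model to make `word` a palindrome, then classify the edit.

    `generate` stands in for any text-generation call (e.g. a chat-completion
    request); it is an assumption for illustration, not part of the paper.
    """
    prompt = (
        f"Edit the word '{word}' so that it becomes a palindrome. "
        "Reply with the edited word only."
    )
    revised = generate(prompt).strip()
    return classify_edit(word, revised)


if __name__ == "__main__":
    # Stand-in "model" that always lengthens the word, mimicking an
    # addition-biased responder; a real experiment would call an LLM here.
    additive_stub = lambda prompt: "catac"
    print(run_palindrome_trial("cat", additive_stub))  # -> additive

Repeated over many trial words, the fraction of responses labeled "additive" would correspond to figures of the kind reported above, such as the 97.85% observed for Llama 3.1.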