A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks

Rachel M. Harrison
{"title":"A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks","authors":"Rachel M. Harrison","doi":"arxiv-2408.09656","DOIUrl":null,"url":null,"abstract":"Random Number Generation Tasks (RNGTs) are used in psychology for examining\nhow humans generate sequences devoid of predictable patterns. By adapting an\nexisting human RNGT for an LLM-compatible environment, this preliminary study\ntests whether ChatGPT-3.5, a large language model (LLM) trained on\nhuman-generated text, exhibits human-like cognitive biases when generating\nrandom number sequences. Initial findings indicate that ChatGPT-3.5 more\neffectively avoids repetitive and sequential patterns compared to humans, with\nnotably lower repeat frequencies and adjacent number frequencies. Continued\nresearch into different models, parameters, and prompting methodologies will\ndeepen our understanding of how LLMs can more closely mimic human random\ngeneration behaviors, while also broadening their applications in cognitive and\nbehavioral science research.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Random Number Generation Tasks (RNGTs) are used in psychology to examine how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 avoids repetitive and sequential patterns more effectively than humans, with notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random generation behaviors, while also broadening their applications in cognitive and behavioral science research.
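
The two statistics mentioned in the abstract, repeat frequency and adjacent number frequency, can be illustrated with a short sketch. The paper does not provide code, so the definitions below (a repeat is the same value produced twice in a row; an adjacent pair differs by exactly one) are assumptions chosen to match common RNGT scoring conventions, and the example sequence is hypothetical rather than taken from the study.

```python
# Minimal sketch (not from the paper) of two RNGT metrics over a digit sequence:
# repeat frequency and adjacent number frequency, each as a proportion of transitions.

from typing import Sequence


def repeat_frequency(seq: Sequence[int]) -> float:
    """Proportion of consecutive pairs where the same number appears twice in a row."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)


def adjacent_number_frequency(seq: Sequence[int]) -> float:
    """Proportion of consecutive pairs that ascend or descend by exactly one."""
    pairs = list(zip(seq, seq[1:]))
    return sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Hypothetical sequence standing in for human- or ChatGPT-generated output.
    example = [3, 7, 7, 2, 9, 8, 1, 5, 4, 4]
    print(f"repeat frequency:          {repeat_frequency(example):.2f}")
    print(f"adjacent number frequency: {adjacent_number_frequency(example):.2f}")
```

Under these assumed definitions, lower values on both measures would indicate that a generator avoids the repetitive and sequential patterns the study compares between ChatGPT-3.5 and human participants.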