AI vs humans in the AUT: Simulations to LLMs

Ken Gilhooly
Journal of Creativity, Volume 34, Issue 1, Article 100071
DOI: 10.1016/j.yjoc.2023.100071
Published: 5 December 2023
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2713374523000304
Citations: 0

Abstract

This paper reviews studies of proposed creative machines applied to a prototypical creative task, the Alternative Uses Task (AUT). Although one system (OROC) did simulate some aspects of human strategies for the AUT, most recent attempts have not been simulation-oriented but have instead used Large Language Model (LLM) systems such as GPT-3, which embody extremely large connectionist networks trained on huge volumes of textual data. The studies reviewed here indicate that LLM-based systems perform on the AUT at or somewhat above human levels in terms of originality and usefulness scores. Moreover, similar patterns appear in human and LLM data on the AUT, such as output order effects and a negative association between originality and value or utility. However, it is concluded that GPT-3 and similar systems, despite generating novel and useful responses, do not display creativity, as they lack agency and are purely algorithmic. LLM studies in this area have so far been largely exploratory, and future studies should guard against possible training data contamination.
