Turing tests in chess: An experiment revealing the role of human subjectivity

IF 4.9 Q1 PSYCHOLOGY, EXPERIMENTAL
Yke Bauke Eisma, Robin Koerts, Joost de Winter
{"title":"国际象棋中的图灵测试:揭示人类主观能动性作用的实验","authors":"Yke Bauke Eisma,&nbsp;Robin Koerts,&nbsp;Joost de Winter","doi":"10.1016/j.chbr.2024.100496","DOIUrl":null,"url":null,"abstract":"<div><div>With the growing capabilities of AI, technology is increasingly able to match or even surpass human performance. In the current study, focused on the game of chess, we investigated whether chess players could distinguish whether they were playing against a human or a computer, and how they achieved this. A total of 24 chess players each played eight 5 + 0 Blitz games from different starting positions. They played against (1) a human, (2) Maia, a neural network-based chess engine trained to play in a human-like manner, (3) Stockfish 16, the best chess engine available, downgraded to play at a lower level, and (4) Stockfish 16 at its maximal level. The opponent’s move time was fixed at 10 s. During the game, participants verbalized their thoughts, and after each game, they indicated by means of a questionnaire whether they thought they had played against a human or a machine and if there were particular moves that revealed the nature of the opponent. The results showed that Stockfish at the highest level was usually correctly identified as an engine, while Maia was often incorrectly identified as a human. The moves of the downgraded Stockfish were relatively often labeled as ‘strange’ by the participants. In conclusion, the Turing test, as applied here in a domain where computers can perform superhumanly, is essentially a test of whether the chess computer can devise suboptimal moves that correspond to human moves, and not necessarily a test of computer intelligence.</div></div>","PeriodicalId":72681,"journal":{"name":"Computers in human behavior reports","volume":"16 ","pages":"Article 100496"},"PeriodicalIF":4.9000,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Turing tests in chess: An experiment revealing the role of human subjectivity\",\"authors\":\"Yke Bauke Eisma,&nbsp;Robin Koerts,&nbsp;Joost de Winter\",\"doi\":\"10.1016/j.chbr.2024.100496\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the growing capabilities of AI, technology is increasingly able to match or even surpass human performance. In the current study, focused on the game of chess, we investigated whether chess players could distinguish whether they were playing against a human or a computer, and how they achieved this. A total of 24 chess players each played eight 5 + 0 Blitz games from different starting positions. They played against (1) a human, (2) Maia, a neural network-based chess engine trained to play in a human-like manner, (3) Stockfish 16, the best chess engine available, downgraded to play at a lower level, and (4) Stockfish 16 at its maximal level. The opponent’s move time was fixed at 10 s. During the game, participants verbalized their thoughts, and after each game, they indicated by means of a questionnaire whether they thought they had played against a human or a machine and if there were particular moves that revealed the nature of the opponent. The results showed that Stockfish at the highest level was usually correctly identified as an engine, while Maia was often incorrectly identified as a human. The moves of the downgraded Stockfish were relatively often labeled as ‘strange’ by the participants. 
In conclusion, the Turing test, as applied here in a domain where computers can perform superhumanly, is essentially a test of whether the chess computer can devise suboptimal moves that correspond to human moves, and not necessarily a test of computer intelligence.</div></div>\",\"PeriodicalId\":72681,\"journal\":{\"name\":\"Computers in human behavior reports\",\"volume\":\"16 \",\"pages\":\"Article 100496\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in human behavior reports\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2451958824001295\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in human behavior reports","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2451958824001295","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Cited by: 0

Abstract

With the growing capabilities of AI, technology is increasingly able to match or even surpass human performance. In the current study, focused on the game of chess, we investigated whether chess players could distinguish whether they were playing against a human or a computer, and how they achieved this. A total of 24 chess players each played eight 5 + 0 Blitz games from different starting positions. They played against (1) a human, (2) Maia, a neural network-based chess engine trained to play in a human-like manner, (3) Stockfish 16, the best chess engine available, downgraded to play at a lower level, and (4) Stockfish 16 at its maximal level. The opponent’s move time was fixed at 10 s. During the game, participants verbalized their thoughts, and after each game, they indicated by means of a questionnaire whether they thought they had played against a human or a machine and if there were particular moves that revealed the nature of the opponent. The results showed that Stockfish at the highest level was usually correctly identified as an engine, while Maia was often incorrectly identified as a human. The moves of the downgraded Stockfish were relatively often labeled as ‘strange’ by the participants. In conclusion, the Turing test, as applied here in a domain where computers can perform superhumanly, is essentially a test of whether the chess computer can devise suboptimal moves that correspond to human moves, and not necessarily a test of computer intelligence.
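The abstract does not specify how Stockfish 16 was downgraded or how the fixed 10-second move time was enforced. One common way to reproduce such a setup is through the engine's UCI interface, for example with the python-chess library and Stockfish's built-in "Skill Level" option. The sketch below is illustrative only: the engine path, the chosen skill value, and the use of python-chess are assumptions, not details reported in the paper.

```python
# Illustrative sketch (not the authors' actual setup): obtaining one engine move
# with python-chess, weakening Stockfish via the UCI "Skill Level" option and
# enforcing the fixed 10-second move time described in the abstract.
import chess
import chess.engine

STOCKFISH_PATH = "stockfish"  # assumed path to a local Stockfish 16 binary
MOVE_TIME_S = 10.0            # fixed opponent move time used in the study


def engine_move(board: chess.Board, downgraded: bool) -> chess.Move:
    """Return one engine move, optionally with reduced playing strength."""
    with chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH) as engine:
        if downgraded:
            # Stockfish's "Skill Level" ranges from 0 to 20; the value 5 is an
            # arbitrary illustrative choice -- the paper does not report the
            # setting that was actually used.
            engine.configure({"Skill Level": 5})
        result = engine.play(board, chess.engine.Limit(time=MOVE_TIME_S))
        return result.move


if __name__ == "__main__":
    board = chess.Board()  # the study used varied starting positions
    print(engine_move(board, downgraded=True))
```

With Limit(time=MOVE_TIME_S), the engine is asked to spend the full 10 s on each move, which mirrors the constant move time the study imposed on all four opponent types so that response speed alone could not reveal the opponent's nature.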