Delving into the psychology of Machines: Exploring the structure of self-regulated learning via LLM-generated survey responses

IF 8.9 · CAS Tier 1 (Psychology) · JCR Q1 · Psychology, Experimental
Leonie V.D.E. Vogelsmeier, Eduardo Oliveira, Kamila Misiejuk, Sonsoles López-Pernas, Mohammed Saqr
{"title":"Delving into the psychology of Machines: Exploring the structure of self-regulated learning via LLM-generated survey responses","authors":"Leonie V.D.E. Vogelsmeier ,&nbsp;Eduardo Oliveira ,&nbsp;Kamila Misiejuk ,&nbsp;Sonsoles López-Pernas ,&nbsp;Mohammed Saqr","doi":"10.1016/j.chb.2025.108769","DOIUrl":null,"url":null,"abstract":"<div><div>Large language models (LLMs) offer the potential to simulate human-like responses and behaviors, creating new opportunities for psychological science. In the context of self-regulated learning (SRL), if LLMs can reliably simulate survey responses at scale and speed, they could be used to test intervention scenarios, refine theoretical models, augment sparse datasets, and represent hard-to-reach populations. However, the validity of LLM-generated survey responses remains uncertain, with limited research focused on SRL and existing studies beyond SRL yielding mixed results. Therefore, in this study, we examined LLM-generated responses to the 44-item Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich &amp; De Groot, 1990), a widely used instrument assessing students’ learning strategies and academic motivation. Particularly, we used the LLMs GPT-4o, Claude 3.7 Sonnet, Gemini 2 Flash, LLaMA 3.1–8B, and Mistral Large. We analyzed item distributions, the psychological network of the theoretical SRL dimensions, and psychometric validity based on the latent factor structure. Our results suggest that Gemini 2 Flash was the most promising LLM, showing considerable sampling variability and producing plausible underlying dimensions and theoretical relationships that are partly aligned with prior theory and empirical findings. At the same time, we observed discrepancies and limitations, underscoring both the potential and current constraints of using LLMs for simulating psychological survey data and applying it in educational contexts.</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"173 ","pages":"Article 108769"},"PeriodicalIF":8.9000,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322500216X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
引用次数: 0

Abstract

Large language models (LLMs) offer the potential to simulate human-like responses and behaviors, creating new opportunities for psychological science. In the context of self-regulated learning (SRL), if LLMs can reliably simulate survey responses at scale and speed, they could be used to test intervention scenarios, refine theoretical models, augment sparse datasets, and represent hard-to-reach populations. However, the validity of LLM-generated survey responses remains uncertain, with limited research focused on SRL and existing studies beyond SRL yielding mixed results. Therefore, in this study, we examined LLM-generated responses to the 44-item Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich & De Groot, 1990), a widely used instrument assessing students’ learning strategies and academic motivation. Specifically, we used the LLMs GPT-4o, Claude 3.7 Sonnet, Gemini 2 Flash, LLaMA 3.1–8B, and Mistral Large. We analyzed item distributions, the psychological network of the theoretical SRL dimensions, and psychometric validity based on the latent factor structure. Our results suggest that Gemini 2 Flash was the most promising LLM, showing considerable sampling variability and producing plausible underlying dimensions and theoretical relationships that are partly aligned with prior theory and empirical findings. At the same time, we observed discrepancies and limitations, underscoring both the potential and current constraints of using LLMs to simulate psychological survey data and apply it in educational contexts.
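For readers curious what such a simulation pipeline might look like in practice, the sketch below illustrates one plausible approach: prompt an LLM to role-play students answering Likert-type items, then examine whether a latent factor structure emerges from the pooled responses. The prompt wording, the `gpt-4o` model name, the paraphrased example items, and the two-factor exploratory fit are assumptions made for illustration, not the authors' protocol.

```python
# Illustrative sketch only -- not the authors' protocol. It shows one way an
# LLM could be prompted to answer MSLQ-style Likert items as simulated
# students, with the pooled responses then checked for a latent factor
# structure. The OpenAI client, the "gpt-4o" model name, the paraphrased
# items, and the two-factor target are all assumptions for illustration.

import json

import pandas as pd
from factor_analyzer import FactorAnalyzer
from openai import OpenAI

ITEMS = [
    "When I study, I set goals to direct my activities.",
    "I ask myself questions to make sure I understand the material.",
    "Even when the material is dull, I keep working until I finish.",
    "I expect to do very well in this class.",
]  # placeholder wording; the actual MSLQ (Pintrich & De Groot, 1990) has 44 items

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simulate_respondent(persona_id: int) -> list[int]:
    """Ask the model to role-play one student and rate every item on a 1-7 scale."""
    prompt = (
        f"You are simulated university student #{persona_id}. Rate each statement "
        "on a 1-7 Likert scale (1 = not at all true of me, 7 = very true of me). "
        "Reply with only a JSON list of integers, one per statement.\n\n"
        + "\n".join(f"{i + 1}. {item}" for i, item in enumerate(ITEMS))
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=1.0,  # sampling variability across simulated respondents
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)


# Collect a small synthetic sample and fit an exploratory factor analysis.
data = pd.DataFrame(
    [simulate_respondent(i) for i in range(50)],
    columns=[f"item_{i + 1}" for i in range(len(ITEMS))],
)
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(data)
print(fa.loadings_)  # do items group into interpretable SRL dimensions?
```

The study itself fitted factor-analytic and psychological-network models to full 44-item MSLQ responses from each of the five LLMs; the sketch only conveys the overall shape of such a pipeline.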
Source journal: Computers in Human Behavior
CiteScore: 19.10 · Self-citation rate: 4.00% · Annual publications: 381 · Review time: 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.