Prompting for creative problem-solving: A process-mining study
Marek Urban, Jiří Lukavský, Cyril Brom, Veronika Hein, Filip Svacha, Filip Děchtěrenko, Kamila Urban
Learning and Instruction, Volume 99, Article 102156 (published 2025-05-23). DOI: 10.1016/j.learninstruc.2025.102156
Citations: 0
Abstract
Background
Although generative-AI systems are increasingly used to solve non-routine problems, effective prompting strategies remain largely underexplored.
Aims
The present study investigates how university students prompt ChatGPT to solve complex ill-defined problems, specifically examining which prompts are associated with higher or lower problem-solving performance.
Sample
Seventy-seven university students (53 women; M_age = 22.4 years) participated in the study.
Methods
To identify the various prompt types students employed, the study used qualitative analysis of interactions with ChatGPT 3.5 recorded while participants solved a creative problem-solving task. Participants’ performance was measured by the quality, elaboration, and originality of their ideas. Subsequently, two-step clustering was employed to identify groups of low- and high-performing students. Finally, process-mining techniques (the Heuristics Miner algorithm) were used to analyze the interaction sequences of low- and high-performing students.
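The abstract only names these analysis steps; the sketch below illustrates, in Python, how such a pipeline could look. All data, column names, and prompt-type codes are hypothetical, two-step clustering is replaced with KMeans as a stand-in, and only the dependency measure at the core of the Heuristics Miner is reproduced rather than a full process-mining toolkit. This is not the authors' code.

```python
# Minimal sketch of the analysis pipeline described above, not the study's script.
# Assumptions: prompts are already coded into event logs of the form
# (student, prompt_type, in order), and performance scores exist per student.
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical performance data: quality, elaboration, originality per student.
scores = pd.DataFrame({
    "student": ["s1", "s2", "s3", "s4"],
    "quality": [2.0, 4.5, 1.5, 4.0],
    "elaboration": [1.0, 3.5, 2.0, 4.5],
    "originality": [1.5, 4.0, 1.0, 3.5],
})

# Split students into two performance groups (stand-in for two-step clustering).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
scores["cluster"] = kmeans.fit_predict(scores[["quality", "elaboration", "originality"]])

# Hypothetical event log of coded prompt types, one row per prompt, in order.
log = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2", "s4", "s4"],
    "prompt_type": ["ask_info", "ask_info", "set_criteria", "generate",
                    "give_feedback", "set_criteria", "generate"],
})

# Core of the Heuristics Miner: count directly-follows pairs within each student's
# sequence, then compute dep(a, b) = (|a>b| - |b>a|) / (|a>b| + |b>a| + 1).
follows = {}
for _, seq in log.groupby("student"):
    types = seq["prompt_type"].tolist()
    for a, b in zip(types, types[1:]):
        follows[(a, b)] = follows.get((a, b), 0) + 1

def dependency(a, b):
    ab = follows.get((a, b), 0)
    ba = follows.get((b, a), 0)
    return (ab - ba) / (ab + ba + 1)

print(scores[["student", "cluster"]])
print(dependency("set_criteria", "generate"))
```

In practice, a dedicated process-mining library would be used to discover and visualize the full dependency graph; the snippet only shows the dependency measure that such tools build on.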
Results
The findings suggest that including clear evaluation criteria when prompting ChatGPT to generate ideas (r_s = .38), providing ChatGPT with an elaborated context for idea generation (r_s = .47), and offering specific feedback (r_s = .45) enhance the quality, elaboration, and originality of the solutions. Successful problem-solving involves iterative human-AI regulation, with high performers using a larger overall number of prompts (d = .82). High performers interacted with ChatGPT through dialogue, in which they monitored and regulated the generation of ideas, whereas low performers used ChatGPT as an information resource.
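To make the reported statistics concrete, the sketch below shows how a Spearman rank correlation (r_s) and a Cohen's d with pooled standard deviation can be computed in Python. The data arrays are invented for illustration and do not come from the study.

```python
# Minimal sketch of the statistics reported above, computed on hypothetical data.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-student counts of prompts containing evaluation criteria,
# paired with hypothetical solution-quality scores.
criteria_prompts = np.array([0, 1, 3, 2, 0, 4, 1, 2])
quality = np.array([1.5, 2.0, 4.0, 3.5, 1.0, 4.5, 2.5, 3.0])

# Spearman rank correlation between prompting behavior and performance.
r_s, p_value = spearmanr(criteria_prompts, quality)

def cohens_d(group_a, group_b):
    """Cohen's d using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * np.var(group_a, ddof=1) +
                  (n_b - 1) * np.var(group_b, ddof=1)) / (n_a + n_b - 2)
    return (np.mean(group_a) - np.mean(group_b)) / np.sqrt(pooled_var)

# Hypothetical total prompt counts for high- vs. low-performing clusters.
high_performers = np.array([14, 18, 11, 16])
low_performers = np.array([7, 9, 6, 10])
d = cohens_d(high_performers, low_performers)

print(round(r_s, 2), round(p_value, 3), round(d, 2))
```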
Conclusions
These results emphasize the importance of active and iterative engagement for creative problem-solving and suggest that educational practices should foster metacognitive monitoring and regulation to maximize the benefits of human-AI collaboration.
About the journal
As an international, multidisciplinary, peer-refereed journal, Learning and Instruction provides a platform for the publication of the most advanced scientific research in the areas of learning, development, instruction and teaching. The journal welcomes original empirical investigations. The papers may represent a variety of theoretical perspectives and different methodological approaches. They may refer to any age level, from infants to adults, and to a diversity of learning and instructional settings, from laboratory experiments to field studies. The major criteria in the review and selection process are the significance of the contribution to the area of learning and instruction and the rigor of the study.