Methodological foundations for artificial intelligence-driven survey question generation

Impact Factor: 3.9 · CAS Zone 2 (Engineering & Technology) · JCR Q1 (Education & Educational Research)
Ted K. Mburu, Kangxuan Rong, Campbell J. McColley, Alexandra Werth
{"title":"Methodological foundations for artificial intelligence-driven survey question generation","authors":"Ted K. Mburu,&nbsp;Kangxuan Rong,&nbsp;Campbell J. McColley,&nbsp;Alexandra Werth","doi":"10.1002/jee.70012","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>We provide step-by-step methods to develop a dynamic survey instrument, driven by artificial intelligence (AI), and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.</p>\n </section>\n \n <section>\n \n <h3> Design</h3>\n \n <p>We examine the questions generated by our survey instrument, as well as compare AI-to-AI, generated through our SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens to examine the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it is not capable of replicating human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and the need for methodological advancements in the development of trustworthy AI-driven survey instruments for educational research.</p>\n </section>\n </div>","PeriodicalId":50206,"journal":{"name":"Journal of Engineering Education","volume":"114 3","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Engineering Education","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jee.70012","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Background

This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.
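The paper does not include an implementation here, but the core idea, prompting a large language model with course context and a participant's prior answer to obtain an adaptive follow-up question, can be sketched as follows. This minimal example assumes the OpenAI chat completions API; the model name, prompt wording, and the `generate_question` helper are illustrative, not taken from the study.

```python
# Minimal sketch of context-aware survey question generation.
# Assumes the OpenAI chat API (OPENAI_API_KEY in the environment);
# all prompt text and inputs below are hypothetical.
from openai import OpenAI

client = OpenAI()

def generate_question(course_context: str, prior_answer: str) -> str:
    """Ask the model for one adaptive, open-ended follow-up question."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You write one clear, open-ended survey question "
                        "for engineering students. Avoid jargon and "
                        "double-barreled phrasing."},
            {"role": "user",
             "content": f"Course context: {course_context}\n"
                        f"Student's previous answer: {prior_answer}\n"
                        "Write one follow-up question."},
        ],
    )
    return response.choices[0].message.content

# Example call with placeholder inputs
print(generate_question("Intro circuits lab on RC filters",
                        "The oscilloscope setup confused me."))
```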

Purpose

We provide step-by-step methods to develop a dynamic survey instrument, driven by artificial intelligence (AI), and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.
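As a rough interpretation of the SQRA idea, the sketch below pairs a question-generating model with a synthetic respondent and an automated critique, then feeds the critique back into the generation prompt before any human deployment. The function names, persona, and critique rubric are assumptions for illustration; the authors' actual framework may differ.

```python
# Illustrative SQRA-style AI-to-AI loop: generate a question, answer it
# with a synthetic respondent, critique the exchange, and refine the
# generation prompt. Not the authors' code; structure is assumed.
from openai import OpenAI

client = OpenAI()

def chat(system: str, user: str) -> str:
    """Single-turn helper around the chat completions endpoint."""
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return out.choices[0].message.content

def sqra_round(gen_prompt: str, persona: str) -> tuple[str, str, str]:
    question = chat(gen_prompt, "Generate one open-ended survey question.")
    answer = chat(f"You are a synthetic respondent: {persona}", question)
    critique = chat(
        "You audit survey questions. Flag redundancy, jargon, and "
        "double-barreled phrasing; then suggest one prompt revision.",
        f"Question: {question}\nSynthetic answer: {answer}",
    )
    return question, answer, critique

# Iterative refinement: feed each critique back into the prompt.
prompt = "You write survey questions about a first-year design course."
for _ in range(3):
    q, a, crit = sqra_round(prompt, "a second-year mechanical engineering student")
    prompt += f"\nRevise per this feedback: {crit}"
print(q)  # final refined question
```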

Design

We examine the questions generated by our survey instrument and compare AI-to-AI interactions, generated through our SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens to examine the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.

Results

We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.
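A toy screen for the clarity issues reported here (double-barreled phrasing, multiple fused questions, jargon) might look like the following; the word list and heuristics are invented for illustration and are not the authors' evaluation criteria.

```python
# Toy heuristic screen for common question-clarity issues; rules and
# the jargon list are illustrative assumptions only.
import re

JARGON = {"pedagogy", "metacognition", "epistemic", "scaffolding"}

def screen_question(question: str) -> list[str]:
    """Return a list of clarity flags for one survey question."""
    flags = []
    # A conjunction joining two asks often signals double-barreling.
    if re.search(r"\band\b|\bor\b", question, re.IGNORECASE):
        flags.append("possible double-barreled phrasing")
    if question.count("?") > 1:
        flags.append("multiple questions in one prompt")
    if any(w.lower().strip(".,?") in JARGON for w in question.split()):
        flags.append("contains jargon")
    return flags

print(screen_question("How did the lab's scaffolding help, "
                      "and what would you change?"))
# -> ['possible double-barreled phrasing', 'contains jargon']
```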

Conclusions

While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it is not capable of replicating human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and the need for methodological advancements in the development of trustworthy AI-driven survey instruments for educational research.

Source journal

Journal of Engineering Education (Engineering & Technology — Engineering: Multidisciplinary)
CiteScore: 12.20
Self-citation rate: 11.80%
Articles per year: 47
Review time: >12 weeks
Journal description: The Journal of Engineering Education (JEE) serves to cultivate, disseminate, and archive scholarly research in engineering education.